00:00:00.000 Started by upstream project "autotest-per-patch" build number 132503
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.025 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:09.934 The recommended git tool is: git
00:00:09.934 using credential 00000000-0000-0000-0000-000000000002
00:00:09.936 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:09.948 Fetching changes from the remote Git repository
00:00:09.952 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:09.969 Using shallow fetch with depth 1
00:00:09.969 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:09.969 > git --version # timeout=10
00:00:09.980 > git --version # 'git version 2.39.2'
00:00:09.980 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:09.993 Setting http proxy: proxy-dmz.intel.com:911
00:00:09.993 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:15.534 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:15.546 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:15.560 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:15.560 > git config core.sparsecheckout # timeout=10
00:00:15.570 > git read-tree -mu HEAD # timeout=10
00:00:15.585 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:15.615 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:15.615 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:15.696 [Pipeline] Start of Pipeline
00:00:15.713 [Pipeline] library
00:00:15.715 Loading library shm_lib@master
00:00:15.715 Library shm_lib@master is cached. Copying from home.
00:00:15.734 [Pipeline] node
00:00:30.744 Still waiting to schedule task
00:00:30.745 Waiting for next available executor on ‘vagrant-vm-host’
00:11:36.607 Running on VM-host-SM38 in /var/jenkins/workspace/nvmf-tcp-vg-autotest
00:11:36.611 [Pipeline] {
00:11:36.625 [Pipeline] catchError
00:11:36.684 [Pipeline] {
00:11:36.717 [Pipeline] wrap
00:11:36.735 [Pipeline] {
00:11:36.742 [Pipeline] stage
00:11:36.743 [Pipeline] { (Prologue)
00:11:36.763 [Pipeline] echo
00:11:36.765 Node: VM-host-SM38
00:11:36.772 [Pipeline] cleanWs
00:11:36.786 [WS-CLEANUP] Deleting project workspace...
00:11:36.787 [WS-CLEANUP] Deferred wipeout is used...
00:11:36.795 [WS-CLEANUP] done
00:11:36.998 [Pipeline] setCustomBuildProperty
00:11:37.088 [Pipeline] httpRequest
00:11:37.407 [Pipeline] echo
00:11:37.409 Sorcerer 10.211.164.20 is alive
00:11:37.421 [Pipeline] retry
00:11:37.423 [Pipeline] {
00:11:37.435 [Pipeline] httpRequest
00:11:37.441 HttpMethod: GET
00:11:37.442 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:11:37.442 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:11:37.444 Response Code: HTTP/1.1 200 OK
00:11:37.445 Success: Status code 200 is in the accepted range: 200,404
00:11:37.445 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:11:37.593 [Pipeline] }
00:11:37.612 [Pipeline] // retry
00:11:37.621 [Pipeline] sh
00:11:37.911 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:11:37.930 [Pipeline] httpRequest
00:11:38.241 [Pipeline] echo
00:11:38.243 Sorcerer 10.211.164.20 is alive
00:11:38.256 [Pipeline] retry
00:11:38.259 [Pipeline] {
00:11:38.274 [Pipeline] httpRequest
00:11:38.279 HttpMethod: GET
00:11:38.280 URL: http://10.211.164.20/packages/spdk_f1dd81af35d0bc4d9f5dce5ca525d9ba3cf32cd3.tar.gz
00:11:38.281 Sending request to url: http://10.211.164.20/packages/spdk_f1dd81af35d0bc4d9f5dce5ca525d9ba3cf32cd3.tar.gz
00:11:38.282 Response Code: HTTP/1.1 200 OK
00:11:38.284 Success: Status code 200 is in the accepted range: 200,404
00:11:38.284 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_f1dd81af35d0bc4d9f5dce5ca525d9ba3cf32cd3.tar.gz
00:11:42.546 [Pipeline] }
00:11:42.567 [Pipeline] // retry
00:11:42.577 [Pipeline] sh
00:11:42.901 + tar --no-same-owner -xf spdk_f1dd81af35d0bc4d9f5dce5ca525d9ba3cf32cd3.tar.gz
00:11:46.211 [Pipeline] sh
00:11:46.494 + git -C spdk log --oneline -n5
00:11:46.494 f1dd81af3 nvme: add spdk_nvme_poll_group_get_fd_group()
00:11:46.494 4da34a829 thread: fd_group-based interrupts
00:11:46.494 10ec63d4e thread: move interrupt allocation to a function
00:11:46.494 393e80fcd util: add method for setting fd_group's wrapper
00:11:46.495 1e9cebf19 util: multi-level fd_group nesting
00:11:46.520 [Pipeline] writeFile
00:11:46.539 [Pipeline] sh
00:11:46.825 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:11:46.838 [Pipeline] sh
00:11:47.124 + cat autorun-spdk.conf
00:11:47.124 SPDK_RUN_FUNCTIONAL_TEST=1
00:11:47.124 SPDK_TEST_NVMF=1
00:11:47.124 SPDK_TEST_NVMF_TRANSPORT=tcp
00:11:47.124 SPDK_TEST_USDT=1
00:11:47.124 SPDK_TEST_NVMF_MDNS=1
00:11:47.125 SPDK_RUN_UBSAN=1
00:11:47.125 NET_TYPE=virt
00:11:47.125 SPDK_JSONRPC_GO_CLIENT=1
00:11:47.125 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:11:47.133 RUN_NIGHTLY=0
00:11:47.135 [Pipeline] }
00:11:47.150 [Pipeline] // stage
00:11:47.167 [Pipeline] stage
00:11:47.170 [Pipeline] { (Run VM)
00:11:47.186 [Pipeline] sh
00:11:47.472 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:11:47.472 + echo 'Start stage prepare_nvme.sh'
00:11:47.472 Start stage prepare_nvme.sh
00:11:47.472 + [[ -n 1 ]]
00:11:47.472 + disk_prefix=ex1
00:11:47.472 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]]
00:11:47.472 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]]
00:11:47.472 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf
00:11:47.472 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:11:47.472 ++ SPDK_TEST_NVMF=1
00:11:47.472 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:11:47.472 ++ SPDK_TEST_USDT=1
00:11:47.472 ++ SPDK_TEST_NVMF_MDNS=1
00:11:47.472 ++ SPDK_RUN_UBSAN=1
00:11:47.472 ++ NET_TYPE=virt
00:11:47.472 ++ SPDK_JSONRPC_GO_CLIENT=1
00:11:47.472 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:11:47.472 ++ RUN_NIGHTLY=0
00:11:47.472 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest
00:11:47.472 + nvme_files=()
00:11:47.472 + declare -A nvme_files
00:11:47.472 + backend_dir=/var/lib/libvirt/images/backends
00:11:47.472 + nvme_files['nvme.img']=5G
00:11:47.472 + nvme_files['nvme-cmb.img']=5G
00:11:47.472 + nvme_files['nvme-multi0.img']=4G
00:11:47.472 + nvme_files['nvme-multi1.img']=4G
00:11:47.472 + nvme_files['nvme-multi2.img']=4G
00:11:47.472 + nvme_files['nvme-openstack.img']=8G
00:11:47.472 + nvme_files['nvme-zns.img']=5G
00:11:47.472 + (( SPDK_TEST_NVME_PMR == 1 ))
00:11:47.472 + (( SPDK_TEST_FTL == 1 ))
00:11:47.472 + (( SPDK_TEST_NVME_FDP == 1 ))
00:11:47.472 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:11:47.472 + for nvme in "${!nvme_files[@]}"
00:11:47.472 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G
00:11:47.472 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:11:47.472 + for nvme in "${!nvme_files[@]}"
00:11:47.472 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G
00:11:47.472 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:11:47.472 + for nvme in "${!nvme_files[@]}"
00:11:47.472 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G
00:11:47.472 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:11:47.472 + for nvme in "${!nvme_files[@]}"
00:11:47.472 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G
00:11:47.472 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:11:47.472 + for nvme in "${!nvme_files[@]}"
00:11:47.472 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G
00:11:47.473 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:11:47.473 + for nvme in "${!nvme_files[@]}"
00:11:47.473 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G
00:11:47.733 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:11:47.733 + for nvme in "${!nvme_files[@]}"
00:11:47.733 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G
00:11:47.733 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:11:47.733 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu
00:11:47.733 + echo 'End stage prepare_nvme.sh'
00:11:47.733 End stage prepare_nvme.sh
00:11:47.748 [Pipeline] sh
00:11:48.034 + DISTRO=fedora39
00:11:48.034 + CPUS=10
00:11:48.034 + RAM=12288
00:11:48.034 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:11:48.034 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -H -a -v -f fedora39
00:11:48.034
00:11:48.034 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant
00:11:48.034 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk
00:11:48.034 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest
00:11:48.034 HELP=0
00:11:48.034 DRY_RUN=0
00:11:48.034 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,
00:11:48.034 NVME_DISKS_TYPE=nvme,nvme,
00:11:48.034 NVME_AUTO_CREATE=0
00:11:48.034 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,
00:11:48.034 NVME_CMB=,,
00:11:48.034 NVME_PMR=,,
00:11:48.034 NVME_ZNS=,,
00:11:48.034 NVME_MS=,,
00:11:48.034 NVME_FDP=,,
00:11:48.034 SPDK_VAGRANT_DISTRO=fedora39
00:11:48.034 SPDK_VAGRANT_VMCPU=10
00:11:48.034 SPDK_VAGRANT_VMRAM=12288
00:11:48.034 SPDK_VAGRANT_PROVIDER=libvirt
00:11:48.034 SPDK_VAGRANT_HTTP_PROXY=
00:11:48.034 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:11:48.034 SPDK_OPENSTACK_NETWORK=0
00:11:48.034 VAGRANT_PACKAGE_BOX=0
00:11:48.034 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:11:48.034 FORCE_DISTRO=true
00:11:48.034 VAGRANT_BOX_VERSION=
00:11:48.034 EXTRA_VAGRANTFILES=
00:11:48.034 NIC_MODEL=e1000
00:11:48.034
00:11:48.034 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt'
00:11:48.034 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest
00:11:50.582 Bringing machine 'default' up with 'libvirt' provider...
00:11:50.842 ==> default: Creating image (snapshot of base box volume).
00:11:51.104 ==> default: Creating domain with the following settings...
00:11:51.104 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732536646_fbd9d9b30ae5998d9be6
00:11:51.104 ==> default: -- Domain type: kvm
00:11:51.104 ==> default: -- Cpus: 10
00:11:51.104 ==> default: -- Feature: acpi
00:11:51.104 ==> default: -- Feature: apic
00:11:51.104 ==> default: -- Feature: pae
00:11:51.104 ==> default: -- Memory: 12288M
00:11:51.104 ==> default: -- Memory Backing: hugepages:
00:11:51.104 ==> default: -- Management MAC:
00:11:51.104 ==> default: -- Loader:
00:11:51.104 ==> default: -- Nvram:
00:11:51.104 ==> default: -- Base box: spdk/fedora39
00:11:51.104 ==> default: -- Storage pool: default
00:11:51.104 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732536646_fbd9d9b30ae5998d9be6.img (20G)
00:11:51.104 ==> default: -- Volume Cache: default
00:11:51.104 ==> default: -- Kernel:
00:11:51.104 ==> default: -- Initrd:
00:11:51.104 ==> default: -- Graphics Type: vnc
00:11:51.104 ==> default: -- Graphics Port: -1
00:11:51.104 ==> default: -- Graphics IP: 127.0.0.1
00:11:51.104 ==> default: -- Graphics Password: Not defined
00:11:51.104 ==> default: -- Video Type: cirrus
00:11:51.104 ==> default: -- Video VRAM: 9216
00:11:51.104 ==> default: -- Sound Type:
00:11:51.104 ==> default: -- Keymap: en-us
00:11:51.104 ==> default: -- TPM Path:
00:11:51.104 ==> default: -- INPUT: type=mouse, bus=ps2
00:11:51.104 ==> default: -- Command line args:
00:11:51.104 ==> default: -> value=-device,
00:11:51.104 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:11:51.104 ==> default: -> value=-drive,
00:11:51.104 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0,
00:11:51.104 ==> default: -> value=-device,
00:11:51.104 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:11:51.104 ==> default: -> value=-device,
00:11:51.104 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:11:51.104 ==> default: -> value=-drive,
00:11:51.104 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:11:51.104 ==> default: -> value=-device,
00:11:51.104 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:11:51.104 ==> default: -> value=-drive,
00:11:51.104 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:11:51.104 ==> default: -> value=-device,
00:11:51.104 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:11:51.104 ==> default: -> value=-drive,
00:11:51.104 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:11:51.104 ==> default: -> value=-device,
00:11:51.104 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:11:51.104 ==> default: Creating shared folders metadata...
00:11:51.104 ==> default: Starting domain.
00:11:52.485 ==> default: Waiting for domain to get an IP address...
00:12:10.622 ==> default: Waiting for SSH to become available...
00:12:10.622 ==> default: Configuring and enabling network interfaces...
00:12:12.534 default: SSH address: 192.168.121.207:22
00:12:12.534 default: SSH username: vagrant
00:12:12.534 default: SSH auth method: private key
00:12:14.508 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:12:21.111 ==> default: Mounting SSHFS shared folder...
00:12:23.023 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:12:23.023 ==> default: Checking Mount..
00:12:23.593 ==> default: Folder Successfully Mounted!
00:12:23.941
00:12:23.941 SUCCESS!
00:12:23.941
00:12:23.941 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:12:23.941 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:12:23.941 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:12:23.941
00:12:23.951 [Pipeline] }
00:12:23.965 [Pipeline] // stage
00:12:23.974 [Pipeline] dir
00:12:23.975 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt
00:12:23.977 [Pipeline] {
00:12:23.989 [Pipeline] catchError
00:12:23.991 [Pipeline] {
00:12:24.004 [Pipeline] sh
00:12:24.286 + vagrant ssh-config --host vagrant
00:12:24.286 + tee ssh_conf
00:12:24.286 + sed -ne '/^Host/,$p'
00:12:27.588 Host vagrant
00:12:27.588 HostName 192.168.121.207
00:12:27.588 User vagrant
00:12:27.588 Port 22
00:12:27.588 UserKnownHostsFile /dev/null
00:12:27.588 StrictHostKeyChecking no
00:12:27.588 PasswordAuthentication no
00:12:27.588 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:12:27.588 IdentitiesOnly yes
00:12:27.588 LogLevel FATAL
00:12:27.588 ForwardAgent yes
00:12:27.588 ForwardX11 yes
00:12:27.588
00:12:27.606 [Pipeline] withEnv
00:12:27.609 [Pipeline] {
00:12:27.626 [Pipeline] sh
00:12:27.974 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:12:27.974 source /etc/os-release
00:12:27.974 [[ -e /image.version ]] && img=$(< /image.version)
00:12:27.974 # Minimal, systemd-like check.
00:12:27.974 if [[ -e /.dockerenv ]]; then
00:12:27.974 # Clear garbage from the node'\''s name:
00:12:27.974 # agt-er_autotest_547-896 -> autotest_547-896
00:12:27.974 # $HOSTNAME is the actual container id
00:12:27.974 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:12:27.974 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:12:27.974 # We can assume this is a mount from a host where container is running,
00:12:27.974 # so fetch its hostname to easily identify the target swarm worker.
00:12:27.974 container="$(< /etc/hostname) ($agent)"
00:12:27.974 else
00:12:27.974 # Fallback
00:12:27.974 container=$agent
00:12:27.974 fi
00:12:27.974 fi
00:12:27.974 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:12:27.974 '
00:12:27.989 [Pipeline] }
00:12:28.006 [Pipeline] // withEnv
00:12:28.015 [Pipeline] setCustomBuildProperty
00:12:28.031 [Pipeline] stage
00:12:28.034 [Pipeline] { (Tests)
00:12:28.053 [Pipeline] sh
00:12:28.337 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:12:28.924 [Pipeline] sh
00:12:29.208 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:12:29.487 [Pipeline] timeout
00:12:29.487 Timeout set to expire in 1 hr 0 min
00:12:29.489 [Pipeline] {
00:12:29.505 [Pipeline] sh
00:12:29.790 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:12:31.257 HEAD is now at f1dd81af3 nvme: add spdk_nvme_poll_group_get_fd_group()
00:12:31.270 [Pipeline] sh
00:12:31.554 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:12:31.828 [Pipeline] sh
00:12:32.106 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:12:32.384 [Pipeline] sh
00:12:32.668 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo'
00:12:33.242 ++ readlink -f spdk_repo
00:12:33.242 + DIR_ROOT=/home/vagrant/spdk_repo
00:12:33.242 + [[ -n /home/vagrant/spdk_repo ]]
00:12:33.242 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:12:33.242 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:12:33.242 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:12:33.242 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:12:33.242 + [[ -d /home/vagrant/spdk_repo/output ]]
00:12:33.242 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]]
00:12:33.242 + cd /home/vagrant/spdk_repo
00:12:33.242 + source /etc/os-release
00:12:33.242 ++ NAME='Fedora Linux'
00:12:33.242 ++ VERSION='39 (Cloud Edition)'
00:12:33.242 ++ ID=fedora
00:12:33.242 ++ VERSION_ID=39
00:12:33.242 ++ VERSION_CODENAME=
00:12:33.242 ++ PLATFORM_ID=platform:f39
00:12:33.242 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:12:33.242 ++ ANSI_COLOR='0;38;2;60;110;180'
00:12:33.242 ++ LOGO=fedora-logo-icon
00:12:33.242 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:12:33.242 ++ HOME_URL=https://fedoraproject.org/
00:12:33.242 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:12:33.242 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:12:33.242 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:12:33.242 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:12:33.242 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:12:33.242 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:12:33.242 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:12:33.242 ++ SUPPORT_END=2024-11-12
00:12:33.242 ++ VARIANT='Cloud Edition'
00:12:33.242 ++ VARIANT_ID=cloud
00:12:33.242 + uname -a
00:12:33.242 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:12:33.242 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:12:33.502 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:12:33.502 Hugepages
00:12:33.502 node hugesize free / total
00:12:33.764 node0 1048576kB 0 / 0
00:12:33.764 node0 2048kB 0 / 0
00:12:33.764
00:12:33.764 Type BDF Vendor Device NUMA Driver Device Block devices
00:12:33.764 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:12:33.764 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:12:33.764 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:12:33.764 + rm -f /tmp/spdk-ld-path
00:12:33.764 + source autorun-spdk.conf
00:12:33.764 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:12:33.764 ++ SPDK_TEST_NVMF=1
00:12:33.764 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:12:33.764 ++ SPDK_TEST_USDT=1
00:12:33.764 ++ SPDK_TEST_NVMF_MDNS=1
00:12:33.764 ++ SPDK_RUN_UBSAN=1
00:12:33.764 ++ NET_TYPE=virt
00:12:33.764 ++ SPDK_JSONRPC_GO_CLIENT=1
00:12:33.764 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:12:33.764 ++ RUN_NIGHTLY=0
00:12:33.764 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:12:33.764 + [[ -n '' ]]
00:12:33.764 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:12:33.764 + for M in /var/spdk/build-*-manifest.txt
00:12:33.764 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:12:33.764 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:12:33.764 + for M in /var/spdk/build-*-manifest.txt
00:12:33.764 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:12:33.764 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:12:33.764 + for M in /var/spdk/build-*-manifest.txt
00:12:33.764 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:12:33.764 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:12:33.764 ++ uname
00:12:33.764 + [[ Linux == \L\i\n\u\x ]]
00:12:33.764 + sudo dmesg -T
00:12:33.764 + sudo dmesg --clear
00:12:33.764 + dmesg_pid=4987
00:12:33.764 + [[ Fedora Linux == FreeBSD ]]
00:12:33.764 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:12:33.764 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:12:33.764 + sudo dmesg -Tw
00:12:33.764 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:12:33.764 + [[ -x /usr/src/fio-static/fio ]]
00:12:33.764 + export FIO_BIN=/usr/src/fio-static/fio
00:12:33.764 + FIO_BIN=/usr/src/fio-static/fio
00:12:33.764 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:12:33.764 + [[ ! -v VFIO_QEMU_BIN ]]
00:12:33.764 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:12:33.764 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:12:33.764 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:12:33.764 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:12:33.764 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:12:33.764 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:12:33.764 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:12:34.026 12:11:29 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:12:34.026 12:11:29 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:12:34.026 12:11:29 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:12:34.026 12:11:29 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:12:34.026 12:11:29 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:12:34.026 12:11:29 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_USDT=1
00:12:34.026 12:11:29 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_MDNS=1
00:12:34.026 12:11:29 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:12:34.026 12:11:29 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt
00:12:34.026 12:11:29 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_JSONRPC_GO_CLIENT=1
00:12:34.026 12:11:29 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:12:34.026 12:11:29 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0
00:12:34.026 12:11:29 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:12:34.026 12:11:29 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:12:34.026 12:11:29 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:12:34.026 12:11:29 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:12:34.026 12:11:29 -- scripts/common.sh@15 -- $ shopt -s extglob
00:12:34.026 12:11:29 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:12:34.026 12:11:29 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:12:34.026 12:11:29 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:12:34.026 12:11:29 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:34.026 12:11:29 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:34.026 12:11:29 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:34.026 12:11:29 -- paths/export.sh@5 -- $ export PATH
00:12:34.026 12:11:29 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:34.026 12:11:29 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:12:34.026 12:11:29 -- common/autobuild_common.sh@493 -- $ date +%s
00:12:34.026 12:11:29 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732536689.XXXXXX
00:12:34.026 12:11:29 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732536689.wB3mwM
00:12:34.026 12:11:29 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:12:34.026 12:11:29 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:12:34.026 12:11:29 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:12:34.026 12:11:29 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:12:34.026 12:11:29 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:12:34.026 12:11:29 -- common/autobuild_common.sh@509 -- $ get_config_params
00:12:34.026 12:11:29 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:12:34.026 12:11:29 -- common/autotest_common.sh@10 -- $ set +x
00:12:34.026 12:11:29 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang'
00:12:34.026 12:11:29 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:12:34.026 12:11:29 -- pm/common@17 -- $ local monitor
00:12:34.026 12:11:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:12:34.026 12:11:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:12:34.026 12:11:29 -- pm/common@25 -- $ sleep 1
00:12:34.026 12:11:29 -- pm/common@21 -- $ date +%s
00:12:34.026 12:11:29 -- pm/common@21 -- $ date +%s
00:12:34.026 12:11:29 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732536689
00:12:34.026 12:11:29 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732536689
00:12:34.026 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732536689_collect-cpu-load.pm.log
00:12:34.026 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732536689_collect-vmstat.pm.log
00:12:34.967 12:11:30 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:12:34.967 12:11:30 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:12:34.967 12:11:30 -- spdk/autobuild.sh@12 -- $ umask 022
00:12:34.967 12:11:30 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:12:34.967 12:11:30 -- spdk/autobuild.sh@16 -- $ date -u
00:12:34.967 Mon Nov 25 12:11:30 PM UTC 2024
00:12:34.967 12:11:30 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:12:34.967 v25.01-pre-225-gf1dd81af3
00:12:34.967 12:11:30 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:12:34.967 12:11:30 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:12:34.967 12:11:30 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:12:34.967 12:11:30 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:12:34.967 12:11:30 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:12:34.967 12:11:30 -- common/autotest_common.sh@10 -- $ set +x
00:12:34.967 ************************************
00:12:34.967 START TEST ubsan
00:12:34.967 ************************************
00:12:34.967 using ubsan
00:12:34.967 12:11:30 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:12:34.967
00:12:34.967 real 0m0.000s
00:12:34.967 user 0m0.000s
00:12:34.967 sys 0m0.000s
00:12:34.967 ************************************
00:12:34.967 END TEST ubsan
00:12:34.967 ************************************
00:12:34.967 12:11:30 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:12:34.967 12:11:30 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:12:35.228 12:11:30 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:12:35.228 12:11:30 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:12:35.228 12:11:30 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:12:35.228 12:11:30 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:12:35.228 12:11:30 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:12:35.228 12:11:30 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:12:35.228 12:11:30 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:12:35.228 12:11:30 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:12:35.228 12:11:30 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang --with-shared
00:12:35.228 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:12:35.228 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:12:35.835 Using 'verbs' RDMA provider
00:12:48.748 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:13:00.976 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:13:00.976 go version go1.21.1 linux/amd64
00:13:00.976 Creating mk/config.mk...done.
00:13:00.976 Creating mk/cc.flags.mk...done.
00:13:00.976 Type 'make' to build.
00:13:00.976 12:11:56 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:13:00.976 12:11:56 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:13:00.976 12:11:56 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:13:00.976 12:11:56 -- common/autotest_common.sh@10 -- $ set +x
00:13:00.976 ************************************
00:13:00.976 START TEST make
00:13:00.976 ************************************
00:13:00.976 12:11:56 make -- common/autotest_common.sh@1129 -- $ make -j10
00:13:00.976 make[1]: Nothing to be done for 'all'.
00:13:15.902 The Meson build system
00:13:15.902 Version: 1.5.0
00:13:15.902 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:13:15.902 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:13:15.902 Build type: native build
00:13:15.902 Program cat found: YES (/usr/bin/cat)
00:13:15.902 Project name: DPDK
00:13:15.902 Project version: 24.03.0
00:13:15.902 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:13:15.902 C linker for the host machine: cc ld.bfd 2.40-14
00:13:15.902 Host machine cpu family: x86_64
00:13:15.902 Host machine cpu: x86_64
00:13:15.902 Message: ## Building in Developer Mode ##
00:13:15.902 Program pkg-config found: YES (/usr/bin/pkg-config)
00:13:15.902 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:13:15.902 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:13:15.902 Program python3 found: YES (/usr/bin/python3)
00:13:15.902 Program cat found: YES (/usr/bin/cat)
00:13:15.902 Compiler for C supports arguments -march=native: YES
00:13:15.902 Checking for size of "void *" : 8
00:13:15.902 Checking for size of "void *" : 8 (cached)
00:13:15.902 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:13:15.902 Library m found: YES
00:13:15.902 Library numa found: YES
00:13:15.902 Has header "numaif.h" : YES
00:13:15.902 Library fdt found: NO
00:13:15.902 Library execinfo found: NO
00:13:15.902 Has header "execinfo.h" : YES
00:13:15.902 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:13:15.902 Run-time dependency libarchive found: NO (tried pkgconfig)
00:13:15.902 Run-time dependency libbsd found: NO (tried pkgconfig)
00:13:15.902 Run-time dependency jansson found: NO (tried pkgconfig)
00:13:15.902 Run-time dependency openssl found: YES 3.1.1
00:13:15.902 Run-time dependency libpcap found: YES 1.10.4
00:13:15.902 Has header "pcap.h" with dependency libpcap: YES
00:13:15.902 Compiler for C supports arguments -Wcast-qual: YES
00:13:15.902 Compiler for C supports arguments -Wdeprecated: YES
00:13:15.902 Compiler for C supports arguments -Wformat: YES
00:13:15.902 Compiler for C supports arguments -Wformat-nonliteral: NO
00:13:15.902 Compiler for C supports arguments -Wformat-security: NO
00:13:15.902 Compiler for C supports arguments -Wmissing-declarations: YES
00:13:15.902 Compiler for C supports arguments -Wmissing-prototypes: YES
00:13:15.902 Compiler for C supports arguments -Wnested-externs: YES
00:13:15.902 Compiler for C supports arguments -Wold-style-definition: YES
00:13:15.902 Compiler for C supports arguments -Wpointer-arith: YES
00:13:15.902 Compiler for C supports arguments -Wsign-compare: YES
00:13:15.902 Compiler for C supports arguments -Wstrict-prototypes: YES
00:13:15.902 Compiler for C supports arguments -Wundef: YES
00:13:15.902 Compiler for C supports arguments -Wwrite-strings: YES
00:13:15.902 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:13:15.902 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:13:15.902 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:13:15.902 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:13:15.902 Program objdump found: YES (/usr/bin/objdump)
00:13:15.902 Compiler for C supports arguments -mavx512f: YES
00:13:15.902 Checking if "AVX512 checking" compiles: YES
00:13:15.902 Fetching value of define "__SSE4_2__" : 1
00:13:15.902 Fetching value of define "__AES__" : 1
00:13:15.902 Fetching value of define "__AVX__" : 1
00:13:15.902 Fetching value of define "__AVX2__" : 1
00:13:15.902 Fetching value of define "__AVX512BW__" : 1
00:13:15.902 Fetching value of define "__AVX512CD__" : 1
00:13:15.902 Fetching value of define "__AVX512DQ__" : 1
00:13:15.902 Fetching value of define "__AVX512F__" : 1
00:13:15.902 Fetching value of define "__AVX512VL__" : 1
00:13:15.902 Fetching value of define "__PCLMUL__" : 1
00:13:15.902 Fetching value of define "__RDRND__" : 1
00:13:15.902 Fetching value of define "__RDSEED__" : 1
00:13:15.902 Fetching value of define "__VPCLMULQDQ__" : 1
00:13:15.902 Fetching value of define "__znver1__" : (undefined)
00:13:15.902 Fetching value of define "__znver2__" : (undefined)
00:13:15.902 Fetching value of define "__znver3__" : (undefined)
00:13:15.902 Fetching value of define "__znver4__" : (undefined)
00:13:15.902 Compiler for C supports arguments -Wno-format-truncation: YES
00:13:15.902 Message: lib/log: Defining dependency "log"
00:13:15.902 Message: lib/kvargs: Defining dependency "kvargs"
00:13:15.902 Message: lib/telemetry: Defining dependency "telemetry"
00:13:15.902 Checking for function "getentropy" : NO
00:13:15.902 Message: lib/eal: Defining dependency "eal"
00:13:15.902 Message: lib/ring: Defining dependency "ring"
00:13:15.902 Message: lib/rcu: Defining dependency "rcu"
00:13:15.902 Message: lib/mempool: Defining dependency "mempool"
00:13:15.902 Message: lib/mbuf: Defining dependency "mbuf"
00:13:15.902 Fetching value of define "__PCLMUL__" : 1 (cached)
00:13:15.902 Fetching value of define "__AVX512F__" : 1 (cached)
00:13:15.902 Fetching value of define "__AVX512BW__" : 1 (cached)
00:13:15.902 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:13:15.902 Fetching value of define "__AVX512VL__" : 1 (cached)
00:13:15.902 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:13:15.902 Compiler for C supports arguments -mpclmul: YES
00:13:15.902 Compiler for C supports arguments -maes: YES
00:13:15.902 Compiler for C supports arguments -mavx512f: YES (cached)
00:13:15.902 Compiler for C supports arguments -mavx512bw: YES
00:13:15.902 Compiler for C supports arguments -mavx512dq: YES
00:13:15.902 Compiler for C supports arguments -mavx512vl: YES
00:13:15.902 Compiler for C supports arguments -mvpclmulqdq: YES
00:13:15.902 Compiler for C supports arguments -mavx2: YES
00:13:15.902 Compiler for C supports arguments -mavx: YES
00:13:15.902 Message: lib/net: Defining dependency "net"
00:13:15.902 Message: lib/meter: Defining dependency "meter"
00:13:15.902 Message: lib/ethdev: Defining dependency "ethdev"
00:13:15.902 Message: lib/pci: Defining dependency "pci"
00:13:15.902 Message: lib/cmdline: Defining dependency "cmdline"
00:13:15.902 Message: lib/hash: Defining dependency "hash"
00:13:15.902 Message: lib/timer: Defining dependency "timer"
00:13:15.902 Message: lib/compressdev: Defining dependency "compressdev"
00:13:15.902 Message: lib/cryptodev: Defining dependency "cryptodev"
00:13:15.902 Message: lib/dmadev: Defining dependency "dmadev"
00:13:15.902 Compiler for C supports arguments -Wno-cast-qual: YES
00:13:15.902 Message: lib/power: Defining dependency "power"
00:13:15.902 Message: lib/reorder: Defining dependency "reorder"
00:13:15.902 Message: lib/security: Defining dependency "security"
00:13:15.902 Has header "linux/userfaultfd.h" : YES
00:13:15.902 Has header "linux/vduse.h" : YES
00:13:15.902 Message: lib/vhost: Defining dependency "vhost"
00:13:15.902 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:13:15.902 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:13:15.902 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:13:15.902 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:13:15.902 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:13:15.902 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:13:15.902 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:13:15.902 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:13:15.902 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:13:15.902 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:13:15.902 Program doxygen found: YES (/usr/local/bin/doxygen)
00:13:15.902 Configuring doxy-api-html.conf using configuration
00:13:15.902 Configuring doxy-api-man.conf using configuration
00:13:15.903 Program mandb found: YES (/usr/bin/mandb)
00:13:15.903 Program sphinx-build found: NO
00:13:15.903 Configuring rte_build_config.h using configuration
00:13:15.903 Message:
00:13:15.903 =================
00:13:15.903 Applications Enabled
00:13:15.903 =================
00:13:15.903
00:13:15.903 apps:
00:13:15.903
00:13:15.903
00:13:15.903 Message:
00:13:15.903 =================
00:13:15.903 Libraries Enabled
00:13:15.903 =================
00:13:15.903
00:13:15.903 libs:
00:13:15.903 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:13:15.903 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:13:15.903 cryptodev, dmadev, power, reorder, security, vhost,
00:13:15.903
00:13:15.903 Message:
00:13:15.903 ===============
00:13:15.903 Drivers Enabled
00:13:15.903 ===============
00:13:15.903
00:13:15.903 common:
00:13:15.903
00:13:15.903 bus:
00:13:15.903 pci, vdev,
00:13:15.903 mempool:
00:13:15.903 ring,
00:13:15.903 dma:
00:13:15.903
00:13:15.903 net:
00:13:15.903
00:13:15.903 crypto:
00:13:15.903
00:13:15.903 compress:
00:13:15.903
00:13:15.903 vdpa:
00:13:15.903
00:13:15.903
00:13:15.903 Message:
00:13:15.903 =================
00:13:15.903 Content Skipped
00:13:15.903 =================
00:13:15.903
00:13:15.903 apps:
00:13:15.903 dumpcap: explicitly disabled via build config
00:13:15.903 graph: explicitly disabled via build config
00:13:15.903 pdump: explicitly disabled via build config
00:13:15.903 proc-info: explicitly disabled via build config
00:13:15.903 test-acl: explicitly disabled via build config
00:13:15.903 test-bbdev: explicitly disabled via build config
00:13:15.903 test-cmdline: explicitly disabled via build config
00:13:15.903 test-compress-perf: explicitly disabled via build config
00:13:15.903 test-crypto-perf: explicitly disabled via build config
00:13:15.903 test-dma-perf: explicitly disabled via build config
00:13:15.903 test-eventdev: explicitly disabled via build config
00:13:15.903 test-fib: explicitly disabled via build config
00:13:15.903 test-flow-perf: explicitly disabled via build config
00:13:15.903 test-gpudev: explicitly disabled via build config
00:13:15.903 test-mldev: explicitly disabled via build config
00:13:15.903 test-pipeline: explicitly disabled via build config
00:13:15.903 test-pmd: explicitly disabled via build config
00:13:15.903 test-regex: explicitly disabled via build config
00:13:15.903 test-sad: explicitly disabled via build config
00:13:15.903 test-security-perf: explicitly disabled via build config
00:13:15.903
00:13:15.903 libs:
00:13:15.903 argparse: explicitly disabled via build config
00:13:15.903 metrics: explicitly disabled via build config
00:13:15.903 acl: explicitly disabled via build config
00:13:15.903 bbdev: explicitly disabled via build config
00:13:15.903 bitratestats: explicitly disabled via build config
00:13:15.903 bpf: explicitly disabled via build config
00:13:15.903 cfgfile: explicitly disabled via build config
00:13:15.903 distributor: explicitly disabled via build config
00:13:15.903 efd: explicitly disabled via build config
00:13:15.903 eventdev: explicitly disabled via build config
00:13:15.903 dispatcher: explicitly disabled via build config
00:13:15.903 gpudev: explicitly disabled via build config
00:13:15.903 gro: explicitly disabled via build config
00:13:15.903 gso: explicitly disabled via build config
00:13:15.903 ip_frag: explicitly disabled via build config
00:13:15.903 jobstats: explicitly disabled via build config
00:13:15.903 latencystats: explicitly disabled via build config
00:13:15.903 lpm: explicitly disabled via build config
00:13:15.903 member: explicitly disabled via build config
00:13:15.903 pcapng: explicitly disabled via build config
00:13:15.903 rawdev: explicitly disabled via build config
00:13:15.903 regexdev: explicitly disabled via build config
00:13:15.903 mldev: explicitly disabled via build config
00:13:15.903 rib: explicitly disabled via build config
00:13:15.903 sched: explicitly disabled via build config
00:13:15.903 stack: explicitly disabled via build config
00:13:15.903 ipsec: explicitly disabled via build config
00:13:15.903 pdcp: explicitly disabled via build config
00:13:15.903 fib: explicitly disabled via build config
00:13:15.903 port: explicitly disabled via build config
00:13:15.903 pdump: explicitly disabled via build config
00:13:15.903 table: explicitly disabled via build config
00:13:15.903 pipeline: explicitly disabled via build config
00:13:15.903 graph: explicitly disabled via build config
00:13:15.903 node: explicitly disabled via build config
00:13:15.903
00:13:15.903 drivers:
00:13:15.903 common/cpt: not in enabled drivers build config
00:13:15.903 common/dpaax: not in enabled drivers build config
00:13:15.903 common/iavf: not in enabled drivers build config
00:13:15.903 common/idpf: not in enabled drivers build config
00:13:15.903 common/ionic: not in enabled drivers build config
00:13:15.903 common/mvep: not in enabled drivers build config
00:13:15.903 common/octeontx: not in enabled drivers build config
00:13:15.903 bus/auxiliary: not in enabled drivers build config
00:13:15.903 bus/cdx: not in enabled drivers build config
00:13:15.903 bus/dpaa: not in enabled drivers build config
00:13:15.903 bus/fslmc: not in enabled drivers build config
00:13:15.903 bus/ifpga: not in enabled drivers build config
00:13:15.903 bus/platform: not in enabled drivers build config
00:13:15.903 bus/uacce: not in enabled drivers build config
00:13:15.903 bus/vmbus: not in enabled drivers build config
00:13:15.903 common/cnxk: not in enabled drivers build config
00:13:15.903 common/mlx5: not in enabled drivers build config
00:13:15.903 common/nfp: not in enabled drivers build config
00:13:15.903 common/nitrox: not in enabled drivers build config
00:13:15.903 common/qat: not in enabled drivers build config
00:13:15.903 common/sfc_efx: not in enabled drivers build config
00:13:15.903 mempool/bucket: not in enabled drivers build config
00:13:15.903 mempool/cnxk: not in enabled drivers build config
00:13:15.903 mempool/dpaa: not in enabled drivers build config
00:13:15.903 mempool/dpaa2: not in enabled drivers build config
00:13:15.903 mempool/octeontx: not in enabled drivers build config
00:13:15.903 mempool/stack: not in enabled drivers build config
00:13:15.903 dma/cnxk: not in enabled drivers build config
00:13:15.903 dma/dpaa: not in enabled drivers build config
00:13:15.903 dma/dpaa2: not in enabled drivers build config
00:13:15.903 dma/hisilicon: not in enabled drivers build config
00:13:15.903 dma/idxd: not in enabled drivers build config
00:13:15.903 dma/ioat: not in enabled drivers build config
00:13:15.903 dma/skeleton: not in enabled drivers build config
00:13:15.903 net/af_packet: not in enabled drivers build config
00:13:15.903 net/af_xdp: not in enabled drivers build config
00:13:15.903 net/ark: not in enabled drivers build config
00:13:15.903 net/atlantic: not in enabled drivers build config
00:13:15.903 net/avp: not in enabled drivers build config
00:13:15.903 net/axgbe: not in enabled drivers build config
00:13:15.903 net/bnx2x: not in enabled drivers build config
00:13:15.903 net/bnxt: not in enabled drivers build config
00:13:15.903 net/bonding: not in enabled drivers build config
00:13:15.903 net/cnxk: not in enabled drivers build config
00:13:15.903 net/cpfl: not in enabled drivers build config
00:13:15.903 net/cxgbe: not in enabled drivers build config
00:13:15.903 net/dpaa: not in enabled drivers build config
00:13:15.903 net/dpaa2: not in enabled drivers build config
00:13:15.903 net/e1000: not in enabled drivers build config
00:13:15.903 net/ena: not in enabled drivers build config
00:13:15.903 net/enetc: not in enabled drivers build config
00:13:15.903 net/enetfec: not in enabled drivers build config
00:13:15.903 net/enic: not in enabled drivers build config
00:13:15.903 net/failsafe: not in enabled drivers build config
00:13:15.903 net/fm10k: not in enabled drivers build config
00:13:15.903 net/gve: not in enabled drivers build config
00:13:15.903 net/hinic: not in enabled drivers build config
00:13:15.903 net/hns3: not in enabled drivers build config
00:13:15.903 net/i40e: not in enabled drivers build config
00:13:15.903 net/iavf: not in enabled drivers build config
00:13:15.903 net/ice: not in enabled drivers build config
00:13:15.903 net/idpf: not in enabled drivers build config
00:13:15.903 net/igc: not in enabled drivers build config
00:13:15.903 net/ionic: not in enabled drivers build config
00:13:15.903 net/ipn3ke: not in enabled drivers build config
00:13:15.903 net/ixgbe: not in enabled drivers build config
00:13:15.903 net/mana: not in enabled drivers build config
00:13:15.903 net/memif: not in enabled drivers build config
00:13:15.903 net/mlx4: not in enabled drivers build config
00:13:15.903 net/mlx5: not in enabled drivers build config
00:13:15.903 net/mvneta: not in enabled drivers build config
00:13:15.903 net/mvpp2: not in enabled drivers build config
00:13:15.903 net/netvsc: not in enabled drivers build config
00:13:15.903 net/nfb: not in enabled drivers build config
00:13:15.903 net/nfp: not in enabled drivers build config
00:13:15.903 net/ngbe: not in enabled drivers build config
00:13:15.903 net/null: not in enabled drivers build config
00:13:15.903 net/octeontx: not in enabled drivers build config
00:13:15.903 net/octeon_ep: not in enabled drivers build config
00:13:15.903 net/pcap: not in enabled drivers build config
00:13:15.903 net/pfe: not in enabled drivers build config
00:13:15.903 net/qede: not in enabled drivers build config
00:13:15.903 net/ring: not in enabled drivers build config
00:13:15.903 net/sfc: not in enabled drivers build config
00:13:15.903 net/softnic: not in enabled drivers build config
00:13:15.903 net/tap: not in enabled drivers build config
00:13:15.903 net/thunderx: not in enabled drivers build config
00:13:15.903 net/txgbe: not in enabled drivers build config
00:13:15.903 net/vdev_netvsc: not in enabled drivers build config
00:13:15.903 net/vhost: not in enabled drivers build config
00:13:15.903 net/virtio: not in enabled drivers build config
00:13:15.903 net/vmxnet3: not in enabled drivers build config
00:13:15.903 raw/*: missing internal dependency, "rawdev"
00:13:15.903 crypto/armv8: not in enabled drivers build config
00:13:15.903 crypto/bcmfs: not in enabled drivers build config
00:13:15.903 crypto/caam_jr: not in enabled drivers build config
00:13:15.903 crypto/ccp: not in enabled drivers build config
00:13:15.903 crypto/cnxk: not in enabled drivers build config
00:13:15.903 crypto/dpaa_sec: not in enabled drivers build config
00:13:15.904 crypto/dpaa2_sec: not in enabled drivers build config
00:13:15.904 crypto/ipsec_mb: not in enabled drivers build config
00:13:15.904 crypto/mlx5: not in enabled drivers build config
00:13:15.904 crypto/mvsam: not in enabled drivers build config
00:13:15.904 crypto/nitrox: not in enabled drivers build config
00:13:15.904 crypto/null: not in enabled drivers build config
00:13:15.904 crypto/octeontx: not in enabled drivers build config
00:13:15.904 crypto/openssl: not in enabled drivers build config
00:13:15.904 crypto/scheduler: not in enabled drivers build config
00:13:15.904 crypto/uadk: not in enabled drivers build config
00:13:15.904 crypto/virtio: not in enabled drivers build config
00:13:15.904 compress/isal: not in enabled drivers build config
00:13:15.904 compress/mlx5: not in enabled drivers build config
00:13:15.904 compress/nitrox: not in enabled drivers build config
00:13:15.904 compress/octeontx: not in enabled drivers build config
00:13:15.904 compress/zlib: not in enabled drivers build config
00:13:15.904 regex/*: missing internal dependency, "regexdev"
00:13:15.904 ml/*: missing internal dependency, "mldev"
00:13:15.904 vdpa/ifc: not in enabled drivers build config
00:13:15.904 vdpa/mlx5: not in enabled drivers build config
00:13:15.904 vdpa/nfp: not in enabled drivers build config
00:13:15.904 vdpa/sfc: not in enabled drivers build config
00:13:15.904 event/*: missing internal dependency, "eventdev"
00:13:15.904 baseband/*: missing internal dependency, "bbdev"
00:13:15.904 gpu/*: missing internal dependency, "gpudev"
00:13:15.904
00:13:15.904
00:13:15.904 Build targets in project: 84
00:13:15.904
00:13:15.904 DPDK 24.03.0
00:13:15.904
00:13:15.904 User defined options
00:13:15.904 buildtype : debug
00:13:15.904 default_library : shared
00:13:15.904 libdir : lib
00:13:15.904 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:13:15.904 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:13:15.904 c_link_args :
00:13:15.904 cpu_instruction_set: native
00:13:15.904 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:13:15.904 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:13:15.904 enable_docs : false
00:13:15.904 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:13:15.904 enable_kmods : false
00:13:15.904 max_lcores : 128
00:13:15.904 tests : false
00:13:15.904
00:13:15.904 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:13:15.904 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:13:15.904 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:13:15.904 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:13:15.904 [3/267] Linking static target lib/librte_kvargs.a
00:13:15.904 [4/267] Compiling C object lib/librte_log.a.p/log_log.c.o
00:13:15.904 [5/267] Linking static target lib/librte_log.a
00:13:15.904 [6/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:13:15.904 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:13:15.904 [8/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:13:15.904 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:13:15.904 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:13:15.904 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:13:15.904 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:13:15.904 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:13:15.904 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:13:15.904 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:13:15.904 [16/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:13:15.904 [17/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:13:15.904 [18/267] Linking static target lib/librte_telemetry.a
00:13:16.166 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:13:16.166 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:13:16.166 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:13:16.424 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:13:16.424 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:13:16.424 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:13:16.424 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:13:16.424 [26/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:13:16.424 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:13:16.424 [28/267] Linking target lib/librte_log.so.24.1
00:13:16.424 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:13:16.686 [30/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:13:16.948 [31/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:13:16.948 [32/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:13:16.948 [33/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:13:16.948 [34/267] Linking target lib/librte_kvargs.so.24.1
00:13:16.948 [35/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:13:16.948 [36/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:13:16.948 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:13:16.948 [38/267] Linking target lib/librte_telemetry.so.24.1
00:13:16.948 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:13:16.948 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:13:17.209 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:13:17.209 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:13:17.209 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:13:17.209 [44/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:13:17.209 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:13:17.209 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:13:17.209 [47/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:13:17.470 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:13:17.470 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:13:17.470 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:13:17.731 [51/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:13:17.731 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:13:17.731 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:13:17.731 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:13:17.731 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:13:18.051 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:13:18.051 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:13:18.051 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:13:18.051 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:13:18.051 [60/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:13:18.051 [61/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:13:18.314 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:13:18.314 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:13:18.314 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:13:18.314 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:13:18.314 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:13:18.575 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:13:18.575 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:13:18.575 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:13:18.836 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:13:18.836 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:13:18.836 [72/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:13:18.836 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:13:18.836 [74/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:13:18.836 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:13:18.836 [76/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:13:18.836 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:13:19.098 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:13:19.098 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:13:19.098 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:13:19.098 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:13:19.098 [82/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:13:19.098 [83/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:13:19.359 [84/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:13:19.359 [85/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:13:19.359 [86/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:13:19.620 [87/267] Linking static target lib/librte_eal.a 00:13:19.620 [88/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:13:19.620 [89/267] Linking static target lib/librte_ring.a 00:13:19.620 [90/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:13:19.620 [91/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:13:19.620 [92/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:13:19.620 [93/267] Linking static target lib/librte_mempool.a 00:13:19.881 [94/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:13:19.881 [95/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:13:19.881 [96/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:13:19.881 [97/267] Linking static target lib/librte_rcu.a 00:13:20.142 [98/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:13:20.142 [99/267] Linking static target lib/librte_mbuf.a 00:13:20.142 [100/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:13:20.142 [101/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:13:20.142 [102/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:13:20.142 [103/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:13:20.142 [104/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:13:20.142 [105/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:13:20.402 [106/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:13:20.665 [107/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:13:20.665 [108/267] Linking static target lib/librte_net.a 00:13:20.665 [109/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:13:20.665 [110/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:13:20.665 [111/267] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:13:20.665 [112/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:13:20.927 [113/267] Linking static target lib/librte_meter.a 00:13:20.927 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:13:20.927 [115/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:13:20.927 [116/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:13:21.252 [117/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:13:21.252 [118/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:13:21.252 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:13:21.252 [120/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:13:21.513 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:13:21.774 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:13:21.774 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:13:21.774 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:13:21.774 [125/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:13:21.774 [126/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:13:21.774 [127/267] Linking static target lib/librte_pci.a 00:13:21.774 [128/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:13:21.774 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:13:22.036 [130/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:13:22.036 [131/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:13:22.036 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:13:22.036 [133/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:13:22.036 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:13:22.036 [135/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:13:22.036 [136/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:13:22.036 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:13:22.036 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:13:22.298 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:13:22.298 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:13:22.298 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:13:22.298 [142/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:13:22.298 [143/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:13:22.298 [144/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:13:22.298 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:13:22.298 [146/267] Linking static target lib/librte_ethdev.a 00:13:22.298 [147/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:13:22.560 [148/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:13:22.560 [149/267] Linking static target lib/librte_cmdline.a 00:13:22.560 [150/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 
00:13:22.560 [151/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:13:22.821 [152/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:13:22.821 [153/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:13:22.821 [154/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:13:22.821 [155/267] Linking static target lib/librte_timer.a 00:13:22.821 [156/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:13:22.821 [157/267] Linking static target lib/librte_hash.a 00:13:23.082 [158/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:13:23.082 [159/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:13:23.082 [160/267] Linking static target lib/librte_compressdev.a 00:13:23.082 [161/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:13:23.342 [162/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:13:23.342 [163/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:13:23.342 [164/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:13:23.603 [165/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:13:23.603 [166/267] Linking static target lib/librte_dmadev.a 00:13:23.603 [167/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:13:23.603 [168/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:13:23.863 [169/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:13:23.863 [170/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:13:23.863 [171/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:13:23.863 [172/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:13:24.124 [173/267] Linking static target lib/librte_cryptodev.a 00:13:24.124 [174/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:13:24.124 [175/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:24.124 [176/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:13:24.384 [177/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:13:24.384 [178/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:13:24.384 [179/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:13:24.384 [180/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:13:24.384 [181/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:13:24.384 [182/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:13:24.644 [183/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:24.644 [184/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:13:24.905 [185/267] Linking static target lib/librte_power.a 00:13:24.905 [186/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:13:24.905 [187/267] Linking static target lib/librte_reorder.a 00:13:24.905 [188/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:13:24.905 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:13:25.226 [190/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 
00:13:25.226 [191/267] Linking static target lib/librte_security.a 00:13:25.226 [192/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:13:25.226 [193/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:13:25.503 [194/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:13:25.764 [195/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:13:25.764 [196/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:13:26.027 [197/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:13:26.027 [198/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:13:26.027 [199/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:13:26.288 [200/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:13:26.288 [201/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:13:26.288 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:13:26.288 [203/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:13:26.550 [204/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:13:26.550 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:13:26.550 [206/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:13:26.550 [207/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:13:26.810 [208/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:26.810 [209/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:13:26.810 [210/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:13:26.810 [211/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:13:26.810 [212/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:13:26.810 [213/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:13:27.070 [214/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:13:27.070 [215/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:13:27.070 [216/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:13:27.331 [217/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:13:27.331 [218/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:13:27.332 [219/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:13:27.332 [220/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:13:27.332 [221/267] Linking static target drivers/librte_mempool_ring.a 00:13:27.332 [222/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:13:27.332 [223/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:13:27.332 [224/267] Linking static target drivers/librte_bus_pci.a 00:13:27.332 [225/267] Linking static target drivers/librte_bus_vdev.a 00:13:27.592 [226/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:27.853 [227/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:13:28.423 
[228/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:13:28.423 [229/267] Linking static target lib/librte_vhost.a 00:13:30.331 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:13:30.331 [231/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:13:30.331 [232/267] Linking target lib/librte_eal.so.24.1 00:13:30.592 [233/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:13:30.592 [234/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:13:30.592 [235/267] Linking target lib/librte_meter.so.24.1 00:13:30.592 [236/267] Linking target lib/librte_pci.so.24.1 00:13:30.592 [237/267] Linking target lib/librte_ring.so.24.1 00:13:30.592 [238/267] Linking target lib/librte_timer.so.24.1 00:13:30.592 [239/267] Linking target drivers/librte_bus_vdev.so.24.1 00:13:30.592 [240/267] Linking target lib/librte_dmadev.so.24.1 00:13:30.853 [241/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:13:30.853 [242/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:13:30.853 [243/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:13:30.853 [244/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:13:30.853 [245/267] Linking target drivers/librte_bus_pci.so.24.1 00:13:30.853 [246/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:13:30.853 [247/267] Linking target lib/librte_mempool.so.24.1 00:13:30.853 [248/267] Linking target lib/librte_rcu.so.24.1 00:13:30.853 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:13:30.853 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:13:31.115 [251/267] Linking target lib/librte_mbuf.so.24.1 00:13:31.115 [252/267] Linking target drivers/librte_mempool_ring.so.24.1 00:13:31.115 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:13:31.115 [254/267] Linking target lib/librte_reorder.so.24.1 00:13:31.115 [255/267] Linking target lib/librte_cryptodev.so.24.1 00:13:31.115 [256/267] Linking target lib/librte_net.so.24.1 00:13:31.115 [257/267] Linking target lib/librte_compressdev.so.24.1 00:13:31.375 [258/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:13:31.375 [259/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:13:31.375 [260/267] Linking target lib/librte_security.so.24.1 00:13:31.375 [261/267] Linking target lib/librte_cmdline.so.24.1 00:13:31.375 [262/267] Linking target lib/librte_hash.so.24.1 00:13:31.375 [263/267] Linking target lib/librte_ethdev.so.24.1 00:13:31.636 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:13:31.636 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:13:31.636 [266/267] Linking target lib/librte_power.so.24.1 00:13:31.636 [267/267] Linking target lib/librte_vhost.so.24.1 00:13:31.636 INFO: autodetecting backend as ninja 00:13:31.636 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:13:58.209 CC lib/ut_mock/mock.o 00:13:58.209 CC lib/log/log.o 00:13:58.209 CC lib/log/log_deprecated.o 00:13:58.209 CC lib/log/log_flags.o 00:13:58.209 CC 
lib/ut/ut.o 00:13:58.209 LIB libspdk_ut_mock.a 00:13:58.209 LIB libspdk_ut.a 00:13:58.209 LIB libspdk_log.a 00:13:58.209 SO libspdk_ut.so.2.0 00:13:58.209 SO libspdk_ut_mock.so.6.0 00:13:58.209 SO libspdk_log.so.7.1 00:13:58.209 SYMLINK libspdk_ut.so 00:13:58.209 SYMLINK libspdk_ut_mock.so 00:13:58.209 SYMLINK libspdk_log.so 00:13:58.209 CC lib/dma/dma.o 00:13:58.209 CC lib/ioat/ioat.o 00:13:58.209 CC lib/util/bit_array.o 00:13:58.209 CC lib/util/base64.o 00:13:58.209 CC lib/util/cpuset.o 00:13:58.209 CC lib/util/crc32.o 00:13:58.209 CC lib/util/crc16.o 00:13:58.209 CXX lib/trace_parser/trace.o 00:13:58.209 CC lib/util/crc32c.o 00:13:58.209 CC lib/vfio_user/host/vfio_user_pci.o 00:13:58.209 CC lib/util/crc32_ieee.o 00:13:58.209 CC lib/util/crc64.o 00:13:58.209 CC lib/util/dif.o 00:13:58.209 LIB libspdk_dma.a 00:13:58.209 SO libspdk_dma.so.5.0 00:13:58.209 CC lib/util/fd.o 00:13:58.209 CC lib/util/fd_group.o 00:13:58.209 LIB libspdk_ioat.a 00:13:58.209 SYMLINK libspdk_dma.so 00:13:58.209 CC lib/util/file.o 00:13:58.209 CC lib/util/hexlify.o 00:13:58.209 CC lib/util/iov.o 00:13:58.209 SO libspdk_ioat.so.7.0 00:13:58.209 CC lib/util/math.o 00:13:58.210 SYMLINK libspdk_ioat.so 00:13:58.210 CC lib/util/net.o 00:13:58.210 CC lib/vfio_user/host/vfio_user.o 00:13:58.210 CC lib/util/pipe.o 00:13:58.210 CC lib/util/strerror_tls.o 00:13:58.210 CC lib/util/string.o 00:13:58.210 CC lib/util/uuid.o 00:13:58.210 CC lib/util/xor.o 00:13:58.210 CC lib/util/zipf.o 00:13:58.210 CC lib/util/md5.o 00:13:58.210 LIB libspdk_vfio_user.a 00:13:58.210 SO libspdk_vfio_user.so.5.0 00:13:58.210 SYMLINK libspdk_vfio_user.so 00:13:58.210 LIB libspdk_util.a 00:13:58.210 SO libspdk_util.so.10.1 00:13:58.210 LIB libspdk_trace_parser.a 00:13:58.210 SO libspdk_trace_parser.so.6.0 00:13:58.210 SYMLINK libspdk_util.so 00:13:58.210 SYMLINK libspdk_trace_parser.so 00:13:58.210 CC lib/env_dpdk/env.o 00:13:58.210 CC lib/env_dpdk/memory.o 00:13:58.210 CC lib/env_dpdk/init.o 00:13:58.210 CC lib/env_dpdk/pci.o 00:13:58.210 CC lib/env_dpdk/threads.o 00:13:58.210 CC lib/vmd/vmd.o 00:13:58.210 CC lib/idxd/idxd.o 00:13:58.210 CC lib/rdma_utils/rdma_utils.o 00:13:58.210 CC lib/json/json_parse.o 00:13:58.210 CC lib/conf/conf.o 00:13:58.210 CC lib/env_dpdk/pci_ioat.o 00:13:58.210 LIB libspdk_rdma_utils.a 00:13:58.210 LIB libspdk_conf.a 00:13:58.210 CC lib/json/json_util.o 00:13:58.210 SO libspdk_rdma_utils.so.1.0 00:13:58.210 SO libspdk_conf.so.6.0 00:13:58.210 CC lib/env_dpdk/pci_virtio.o 00:13:58.210 SYMLINK libspdk_rdma_utils.so 00:13:58.210 CC lib/idxd/idxd_user.o 00:13:58.210 CC lib/env_dpdk/pci_vmd.o 00:13:58.210 SYMLINK libspdk_conf.so 00:13:58.210 CC lib/idxd/idxd_kernel.o 00:13:58.210 CC lib/json/json_write.o 00:13:58.210 CC lib/env_dpdk/pci_idxd.o 00:13:58.210 CC lib/env_dpdk/pci_event.o 00:13:58.210 CC lib/vmd/led.o 00:13:58.210 CC lib/env_dpdk/sigbus_handler.o 00:13:58.210 CC lib/rdma_provider/common.o 00:13:58.210 LIB libspdk_idxd.a 00:13:58.210 CC lib/env_dpdk/pci_dpdk.o 00:13:58.210 CC lib/env_dpdk/pci_dpdk_2207.o 00:13:58.210 SO libspdk_idxd.so.12.1 00:13:58.210 CC lib/rdma_provider/rdma_provider_verbs.o 00:13:58.210 LIB libspdk_vmd.a 00:13:58.210 CC lib/env_dpdk/pci_dpdk_2211.o 00:13:58.210 SYMLINK libspdk_idxd.so 00:13:58.210 SO libspdk_vmd.so.6.0 00:13:58.210 LIB libspdk_json.a 00:13:58.210 SYMLINK libspdk_vmd.so 00:13:58.210 SO libspdk_json.so.6.0 00:13:58.210 SYMLINK libspdk_json.so 00:13:58.210 LIB libspdk_rdma_provider.a 00:13:58.210 SO libspdk_rdma_provider.so.7.0 00:13:58.210 SYMLINK libspdk_rdma_provider.so 
00:13:58.210 CC lib/jsonrpc/jsonrpc_server.o 00:13:58.210 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:13:58.210 CC lib/jsonrpc/jsonrpc_client.o 00:13:58.210 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:13:58.210 LIB libspdk_jsonrpc.a 00:13:58.210 LIB libspdk_env_dpdk.a 00:13:58.210 SO libspdk_jsonrpc.so.6.0 00:13:58.210 SO libspdk_env_dpdk.so.15.1 00:13:58.210 SYMLINK libspdk_jsonrpc.so 00:13:58.210 SYMLINK libspdk_env_dpdk.so 00:13:58.210 CC lib/rpc/rpc.o 00:13:58.210 LIB libspdk_rpc.a 00:13:58.210 SO libspdk_rpc.so.6.0 00:13:58.210 SYMLINK libspdk_rpc.so 00:13:58.210 CC lib/keyring/keyring.o 00:13:58.210 CC lib/keyring/keyring_rpc.o 00:13:58.210 CC lib/notify/notify_rpc.o 00:13:58.210 CC lib/notify/notify.o 00:13:58.210 CC lib/trace/trace.o 00:13:58.210 CC lib/trace/trace_flags.o 00:13:58.210 CC lib/trace/trace_rpc.o 00:13:58.210 LIB libspdk_notify.a 00:13:58.210 SO libspdk_notify.so.6.0 00:13:58.210 LIB libspdk_keyring.a 00:13:58.210 LIB libspdk_trace.a 00:13:58.210 SYMLINK libspdk_notify.so 00:13:58.210 SO libspdk_keyring.so.2.0 00:13:58.210 SO libspdk_trace.so.11.0 00:13:58.210 SYMLINK libspdk_keyring.so 00:13:58.210 SYMLINK libspdk_trace.so 00:13:58.210 CC lib/thread/thread.o 00:13:58.210 CC lib/thread/iobuf.o 00:13:58.210 CC lib/sock/sock.o 00:13:58.210 CC lib/sock/sock_rpc.o 00:13:58.210 LIB libspdk_sock.a 00:13:58.210 SO libspdk_sock.so.10.0 00:13:58.210 SYMLINK libspdk_sock.so 00:13:58.471 CC lib/nvme/nvme_ctrlr.o 00:13:58.471 CC lib/nvme/nvme_ctrlr_cmd.o 00:13:58.471 CC lib/nvme/nvme_ns_cmd.o 00:13:58.471 CC lib/nvme/nvme_fabric.o 00:13:58.471 CC lib/nvme/nvme_ns.o 00:13:58.471 CC lib/nvme/nvme_pcie.o 00:13:58.471 CC lib/nvme/nvme_pcie_common.o 00:13:58.471 CC lib/nvme/nvme.o 00:13:58.471 CC lib/nvme/nvme_qpair.o 00:13:59.042 CC lib/nvme/nvme_quirks.o 00:13:59.042 CC lib/nvme/nvme_transport.o 00:13:59.042 CC lib/nvme/nvme_discovery.o 00:13:59.042 LIB libspdk_thread.a 00:13:59.042 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:13:59.042 SO libspdk_thread.so.11.0 00:13:59.042 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:13:59.042 SYMLINK libspdk_thread.so 00:13:59.042 CC lib/nvme/nvme_tcp.o 00:13:59.042 CC lib/nvme/nvme_opal.o 00:13:59.303 CC lib/nvme/nvme_io_msg.o 00:13:59.303 CC lib/nvme/nvme_poll_group.o 00:13:59.303 CC lib/nvme/nvme_zns.o 00:13:59.563 CC lib/nvme/nvme_stubs.o 00:13:59.563 CC lib/nvme/nvme_auth.o 00:13:59.563 CC lib/nvme/nvme_cuse.o 00:13:59.563 CC lib/nvme/nvme_rdma.o 00:13:59.823 CC lib/accel/accel.o 00:13:59.823 CC lib/accel/accel_rpc.o 00:13:59.823 CC lib/blob/blobstore.o 00:13:59.823 CC lib/blob/request.o 00:14:00.084 CC lib/blob/zeroes.o 00:14:00.084 CC lib/blob/blob_bs_dev.o 00:14:00.084 CC lib/accel/accel_sw.o 00:14:00.344 CC lib/init/json_config.o 00:14:00.344 CC lib/init/subsystem.o 00:14:00.344 CC lib/init/subsystem_rpc.o 00:14:00.344 CC lib/virtio/virtio.o 00:14:00.344 CC lib/fsdev/fsdev.o 00:14:00.344 CC lib/fsdev/fsdev_io.o 00:14:00.344 CC lib/fsdev/fsdev_rpc.o 00:14:00.603 CC lib/init/rpc.o 00:14:00.603 CC lib/virtio/virtio_vhost_user.o 00:14:00.603 CC lib/virtio/virtio_vfio_user.o 00:14:00.603 LIB libspdk_accel.a 00:14:00.603 SO libspdk_accel.so.16.0 00:14:00.603 CC lib/virtio/virtio_pci.o 00:14:00.603 SYMLINK libspdk_accel.so 00:14:00.603 LIB libspdk_init.a 00:14:00.603 SO libspdk_init.so.6.0 00:14:00.603 LIB libspdk_nvme.a 00:14:00.603 SYMLINK libspdk_init.so 00:14:00.864 CC lib/bdev/bdev.o 00:14:00.864 CC lib/bdev/bdev_rpc.o 00:14:00.864 CC lib/bdev/bdev_zone.o 00:14:00.864 CC lib/bdev/part.o 00:14:00.864 CC lib/bdev/scsi_nvme.o 00:14:00.864 SO libspdk_nvme.so.15.0 
00:14:00.864 LIB libspdk_virtio.a 00:14:00.864 SO libspdk_virtio.so.7.0 00:14:00.864 CC lib/event/app.o 00:14:00.864 LIB libspdk_fsdev.a 00:14:00.864 SO libspdk_fsdev.so.2.0 00:14:00.864 SYMLINK libspdk_virtio.so 00:14:00.864 CC lib/event/reactor.o 00:14:00.864 SYMLINK libspdk_nvme.so 00:14:00.864 CC lib/event/log_rpc.o 00:14:00.864 CC lib/event/app_rpc.o 00:14:01.124 SYMLINK libspdk_fsdev.so 00:14:01.124 CC lib/event/scheduler_static.o 00:14:01.124 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:14:01.385 LIB libspdk_event.a 00:14:01.385 SO libspdk_event.so.14.0 00:14:01.385 SYMLINK libspdk_event.so 00:14:01.647 LIB libspdk_fuse_dispatcher.a 00:14:01.647 SO libspdk_fuse_dispatcher.so.1.0 00:14:01.647 SYMLINK libspdk_fuse_dispatcher.so 00:14:01.909 LIB libspdk_blob.a 00:14:01.909 SO libspdk_blob.so.11.0 00:14:01.909 SYMLINK libspdk_blob.so 00:14:02.168 CC lib/lvol/lvol.o 00:14:02.168 CC lib/blobfs/tree.o 00:14:02.168 CC lib/blobfs/blobfs.o 00:14:02.739 LIB libspdk_lvol.a 00:14:02.739 SO libspdk_lvol.so.10.0 00:14:02.739 LIB libspdk_blobfs.a 00:14:03.002 LIB libspdk_bdev.a 00:14:03.002 SYMLINK libspdk_lvol.so 00:14:03.002 SO libspdk_blobfs.so.10.0 00:14:03.002 SO libspdk_bdev.so.17.0 00:14:03.002 SYMLINK libspdk_blobfs.so 00:14:03.002 SYMLINK libspdk_bdev.so 00:14:03.262 CC lib/nvmf/ctrlr.o 00:14:03.262 CC lib/nbd/nbd.o 00:14:03.263 CC lib/nvmf/ctrlr_bdev.o 00:14:03.263 CC lib/nvmf/nvmf.o 00:14:03.263 CC lib/nvmf/subsystem.o 00:14:03.263 CC lib/nvmf/ctrlr_discovery.o 00:14:03.263 CC lib/ublk/ublk.o 00:14:03.263 CC lib/nbd/nbd_rpc.o 00:14:03.263 CC lib/scsi/dev.o 00:14:03.263 CC lib/ftl/ftl_core.o 00:14:03.263 CC lib/scsi/lun.o 00:14:03.539 CC lib/ublk/ublk_rpc.o 00:14:03.539 CC lib/ftl/ftl_init.o 00:14:03.539 LIB libspdk_nbd.a 00:14:03.539 CC lib/scsi/port.o 00:14:03.539 SO libspdk_nbd.so.7.0 00:14:03.539 CC lib/ftl/ftl_layout.o 00:14:03.539 CC lib/ftl/ftl_debug.o 00:14:03.539 SYMLINK libspdk_nbd.so 00:14:03.539 CC lib/ftl/ftl_io.o 00:14:03.539 CC lib/scsi/scsi.o 00:14:03.799 CC lib/ftl/ftl_sb.o 00:14:03.799 LIB libspdk_ublk.a 00:14:03.799 SO libspdk_ublk.so.3.0 00:14:03.799 CC lib/ftl/ftl_l2p.o 00:14:03.799 CC lib/scsi/scsi_bdev.o 00:14:03.799 SYMLINK libspdk_ublk.so 00:14:03.799 CC lib/ftl/ftl_l2p_flat.o 00:14:03.799 CC lib/ftl/ftl_nv_cache.o 00:14:03.799 CC lib/ftl/ftl_band.o 00:14:03.799 CC lib/scsi/scsi_pr.o 00:14:03.799 CC lib/scsi/scsi_rpc.o 00:14:03.799 CC lib/scsi/task.o 00:14:03.799 CC lib/ftl/ftl_band_ops.o 00:14:04.060 CC lib/ftl/ftl_writer.o 00:14:04.060 CC lib/nvmf/nvmf_rpc.o 00:14:04.060 CC lib/nvmf/transport.o 00:14:04.060 CC lib/nvmf/tcp.o 00:14:04.060 CC lib/ftl/ftl_rq.o 00:14:04.060 CC lib/ftl/ftl_reloc.o 00:14:04.060 LIB libspdk_scsi.a 00:14:04.060 CC lib/ftl/ftl_l2p_cache.o 00:14:04.319 SO libspdk_scsi.so.9.0 00:14:04.319 CC lib/nvmf/stubs.o 00:14:04.319 CC lib/ftl/ftl_p2l.o 00:14:04.319 SYMLINK libspdk_scsi.so 00:14:04.319 CC lib/nvmf/mdns_server.o 00:14:04.319 CC lib/ftl/ftl_p2l_log.o 00:14:04.580 CC lib/nvmf/rdma.o 00:14:04.580 CC lib/nvmf/auth.o 00:14:04.580 CC lib/ftl/mngt/ftl_mngt.o 00:14:04.580 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:14:04.580 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:14:04.580 CC lib/ftl/mngt/ftl_mngt_startup.o 00:14:04.580 CC lib/ftl/mngt/ftl_mngt_md.o 00:14:04.580 CC lib/ftl/mngt/ftl_mngt_misc.o 00:14:04.580 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:14:04.841 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:14:04.841 CC lib/ftl/mngt/ftl_mngt_band.o 00:14:04.841 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:14:04.841 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:14:04.841 CC 
lib/ftl/mngt/ftl_mngt_recovery.o 00:14:04.842 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:14:05.103 CC lib/ftl/utils/ftl_conf.o 00:14:05.103 CC lib/iscsi/conn.o 00:14:05.103 CC lib/iscsi/init_grp.o 00:14:05.103 CC lib/iscsi/iscsi.o 00:14:05.103 CC lib/iscsi/param.o 00:14:05.103 CC lib/vhost/vhost.o 00:14:05.103 CC lib/vhost/vhost_rpc.o 00:14:05.103 CC lib/vhost/vhost_scsi.o 00:14:05.103 CC lib/iscsi/portal_grp.o 00:14:05.103 CC lib/iscsi/tgt_node.o 00:14:05.103 CC lib/ftl/utils/ftl_md.o 00:14:05.363 CC lib/iscsi/iscsi_subsystem.o 00:14:05.363 CC lib/iscsi/iscsi_rpc.o 00:14:05.363 CC lib/iscsi/task.o 00:14:05.624 CC lib/ftl/utils/ftl_mempool.o 00:14:05.624 CC lib/ftl/utils/ftl_bitmap.o 00:14:05.624 CC lib/vhost/vhost_blk.o 00:14:05.624 CC lib/vhost/rte_vhost_user.o 00:14:05.624 CC lib/ftl/utils/ftl_property.o 00:14:05.624 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:14:05.624 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:14:05.885 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:14:05.885 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:14:05.885 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:14:05.885 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:14:05.885 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:14:05.885 CC lib/ftl/upgrade/ftl_sb_v3.o 00:14:05.885 CC lib/ftl/upgrade/ftl_sb_v5.o 00:14:05.885 CC lib/ftl/nvc/ftl_nvc_dev.o 00:14:05.885 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:14:06.146 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:14:06.146 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:14:06.146 CC lib/ftl/base/ftl_base_dev.o 00:14:06.146 CC lib/ftl/base/ftl_base_bdev.o 00:14:06.146 CC lib/ftl/ftl_trace.o 00:14:06.146 LIB libspdk_iscsi.a 00:14:06.146 LIB libspdk_nvmf.a 00:14:06.146 SO libspdk_iscsi.so.8.0 00:14:06.407 SO libspdk_nvmf.so.20.0 00:14:06.407 SYMLINK libspdk_iscsi.so 00:14:06.407 LIB libspdk_ftl.a 00:14:06.407 SYMLINK libspdk_nvmf.so 00:14:06.407 LIB libspdk_vhost.a 00:14:06.407 SO libspdk_ftl.so.9.0 00:14:06.669 SO libspdk_vhost.so.8.0 00:14:06.669 SYMLINK libspdk_vhost.so 00:14:06.669 SYMLINK libspdk_ftl.so 00:14:06.935 CC module/env_dpdk/env_dpdk_rpc.o 00:14:06.935 CC module/accel/error/accel_error.o 00:14:06.935 CC module/scheduler/dynamic/scheduler_dynamic.o 00:14:06.935 CC module/fsdev/aio/fsdev_aio.o 00:14:06.935 CC module/scheduler/gscheduler/gscheduler.o 00:14:06.935 CC module/accel/ioat/accel_ioat.o 00:14:06.935 CC module/blob/bdev/blob_bdev.o 00:14:06.935 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:14:07.225 CC module/keyring/file/keyring.o 00:14:07.225 CC module/sock/posix/posix.o 00:14:07.225 LIB libspdk_env_dpdk_rpc.a 00:14:07.225 SO libspdk_env_dpdk_rpc.so.6.0 00:14:07.225 SYMLINK libspdk_env_dpdk_rpc.so 00:14:07.225 CC module/fsdev/aio/fsdev_aio_rpc.o 00:14:07.225 LIB libspdk_scheduler_gscheduler.a 00:14:07.225 CC module/keyring/file/keyring_rpc.o 00:14:07.225 LIB libspdk_scheduler_dpdk_governor.a 00:14:07.225 SO libspdk_scheduler_gscheduler.so.4.0 00:14:07.225 SO libspdk_scheduler_dpdk_governor.so.4.0 00:14:07.225 CC module/accel/ioat/accel_ioat_rpc.o 00:14:07.225 CC module/accel/error/accel_error_rpc.o 00:14:07.225 SYMLINK libspdk_scheduler_gscheduler.so 00:14:07.225 CC module/fsdev/aio/linux_aio_mgr.o 00:14:07.225 SYMLINK libspdk_scheduler_dpdk_governor.so 00:14:07.225 LIB libspdk_blob_bdev.a 00:14:07.225 SO libspdk_blob_bdev.so.11.0 00:14:07.225 LIB libspdk_scheduler_dynamic.a 00:14:07.225 LIB libspdk_keyring_file.a 00:14:07.225 SO libspdk_scheduler_dynamic.so.4.0 00:14:07.225 SO libspdk_keyring_file.so.2.0 00:14:07.225 LIB libspdk_accel_ioat.a 00:14:07.225 SYMLINK libspdk_blob_bdev.so 00:14:07.486 LIB 
libspdk_accel_error.a 00:14:07.486 SYMLINK libspdk_scheduler_dynamic.so 00:14:07.486 SO libspdk_accel_error.so.2.0 00:14:07.486 SO libspdk_accel_ioat.so.6.0 00:14:07.486 SYMLINK libspdk_keyring_file.so 00:14:07.486 CC module/keyring/linux/keyring.o 00:14:07.486 SYMLINK libspdk_accel_ioat.so 00:14:07.486 CC module/keyring/linux/keyring_rpc.o 00:14:07.486 SYMLINK libspdk_accel_error.so 00:14:07.486 CC module/accel/dsa/accel_dsa.o 00:14:07.486 CC module/accel/iaa/accel_iaa.o 00:14:07.486 CC module/accel/dsa/accel_dsa_rpc.o 00:14:07.486 LIB libspdk_keyring_linux.a 00:14:07.486 LIB libspdk_fsdev_aio.a 00:14:07.486 SO libspdk_keyring_linux.so.1.0 00:14:07.747 SO libspdk_fsdev_aio.so.1.0 00:14:07.747 CC module/bdev/gpt/gpt.o 00:14:07.747 CC module/bdev/delay/vbdev_delay.o 00:14:07.747 CC module/bdev/error/vbdev_error.o 00:14:07.747 SYMLINK libspdk_keyring_linux.so 00:14:07.747 CC module/bdev/delay/vbdev_delay_rpc.o 00:14:07.747 CC module/blobfs/bdev/blobfs_bdev.o 00:14:07.747 LIB libspdk_sock_posix.a 00:14:07.747 SYMLINK libspdk_fsdev_aio.so 00:14:07.747 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:14:07.747 CC module/bdev/error/vbdev_error_rpc.o 00:14:07.747 CC module/accel/iaa/accel_iaa_rpc.o 00:14:07.747 SO libspdk_sock_posix.so.6.0 00:14:07.747 LIB libspdk_accel_dsa.a 00:14:07.747 SO libspdk_accel_dsa.so.5.0 00:14:07.747 CC module/bdev/gpt/vbdev_gpt.o 00:14:07.747 SYMLINK libspdk_sock_posix.so 00:14:07.747 LIB libspdk_accel_iaa.a 00:14:07.747 SYMLINK libspdk_accel_dsa.so 00:14:07.747 LIB libspdk_blobfs_bdev.a 00:14:07.747 SO libspdk_accel_iaa.so.3.0 00:14:07.747 SO libspdk_blobfs_bdev.so.6.0 00:14:07.747 LIB libspdk_bdev_error.a 00:14:08.008 SO libspdk_bdev_error.so.6.0 00:14:08.008 SYMLINK libspdk_accel_iaa.so 00:14:08.008 SYMLINK libspdk_blobfs_bdev.so 00:14:08.008 SYMLINK libspdk_bdev_error.so 00:14:08.008 LIB libspdk_bdev_delay.a 00:14:08.008 CC module/bdev/null/bdev_null.o 00:14:08.008 CC module/bdev/malloc/bdev_malloc.o 00:14:08.008 CC module/bdev/lvol/vbdev_lvol.o 00:14:08.008 SO libspdk_bdev_delay.so.6.0 00:14:08.008 CC module/bdev/nvme/bdev_nvme.o 00:14:08.008 CC module/bdev/passthru/vbdev_passthru.o 00:14:08.008 LIB libspdk_bdev_gpt.a 00:14:08.008 SYMLINK libspdk_bdev_delay.so 00:14:08.008 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:14:08.008 SO libspdk_bdev_gpt.so.6.0 00:14:08.008 CC module/bdev/raid/bdev_raid.o 00:14:08.008 CC module/bdev/split/vbdev_split.o 00:14:08.008 SYMLINK libspdk_bdev_gpt.so 00:14:08.008 CC module/bdev/raid/bdev_raid_rpc.o 00:14:08.008 CC module/bdev/zone_block/vbdev_zone_block.o 00:14:08.269 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:14:08.269 CC module/bdev/null/bdev_null_rpc.o 00:14:08.269 CC module/bdev/malloc/bdev_malloc_rpc.o 00:14:08.269 LIB libspdk_bdev_passthru.a 00:14:08.269 CC module/bdev/split/vbdev_split_rpc.o 00:14:08.269 SO libspdk_bdev_passthru.so.6.0 00:14:08.269 CC module/bdev/raid/bdev_raid_sb.o 00:14:08.269 LIB libspdk_bdev_null.a 00:14:08.269 SYMLINK libspdk_bdev_passthru.so 00:14:08.269 CC module/bdev/raid/raid0.o 00:14:08.269 SO libspdk_bdev_null.so.6.0 00:14:08.269 CC module/bdev/nvme/bdev_nvme_rpc.o 00:14:08.530 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:14:08.530 LIB libspdk_bdev_malloc.a 00:14:08.530 LIB libspdk_bdev_split.a 00:14:08.530 SYMLINK libspdk_bdev_null.so 00:14:08.530 SO libspdk_bdev_split.so.6.0 00:14:08.530 SO libspdk_bdev_malloc.so.6.0 00:14:08.530 LIB libspdk_bdev_lvol.a 00:14:08.530 SYMLINK libspdk_bdev_malloc.so 00:14:08.530 SYMLINK libspdk_bdev_split.so 00:14:08.530 SO libspdk_bdev_lvol.so.6.0 
00:14:08.530 LIB libspdk_bdev_zone_block.a 00:14:08.530 CC module/bdev/raid/raid1.o 00:14:08.530 SYMLINK libspdk_bdev_lvol.so 00:14:08.530 CC module/bdev/raid/concat.o 00:14:08.530 CC module/bdev/aio/bdev_aio.o 00:14:08.530 SO libspdk_bdev_zone_block.so.6.0 00:14:08.530 SYMLINK libspdk_bdev_zone_block.so 00:14:08.530 CC module/bdev/nvme/nvme_rpc.o 00:14:08.530 CC module/bdev/ftl/bdev_ftl.o 00:14:08.791 CC module/bdev/iscsi/bdev_iscsi.o 00:14:08.791 CC module/bdev/virtio/bdev_virtio_scsi.o 00:14:08.791 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:14:08.791 CC module/bdev/ftl/bdev_ftl_rpc.o 00:14:08.791 CC module/bdev/nvme/bdev_mdns_client.o 00:14:08.791 CC module/bdev/aio/bdev_aio_rpc.o 00:14:08.791 CC module/bdev/nvme/vbdev_opal.o 00:14:08.791 LIB libspdk_bdev_raid.a 00:14:09.053 CC module/bdev/nvme/vbdev_opal_rpc.o 00:14:09.053 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:14:09.053 SO libspdk_bdev_raid.so.6.0 00:14:09.053 LIB libspdk_bdev_ftl.a 00:14:09.053 LIB libspdk_bdev_aio.a 00:14:09.053 SYMLINK libspdk_bdev_raid.so 00:14:09.053 LIB libspdk_bdev_iscsi.a 00:14:09.053 SO libspdk_bdev_ftl.so.6.0 00:14:09.053 CC module/bdev/virtio/bdev_virtio_blk.o 00:14:09.053 SO libspdk_bdev_iscsi.so.6.0 00:14:09.053 SO libspdk_bdev_aio.so.6.0 00:14:09.053 SYMLINK libspdk_bdev_ftl.so 00:14:09.053 CC module/bdev/virtio/bdev_virtio_rpc.o 00:14:09.053 SYMLINK libspdk_bdev_iscsi.so 00:14:09.053 SYMLINK libspdk_bdev_aio.so 00:14:09.314 LIB libspdk_bdev_virtio.a 00:14:09.314 SO libspdk_bdev_virtio.so.6.0 00:14:09.314 SYMLINK libspdk_bdev_virtio.so 00:14:10.256 LIB libspdk_bdev_nvme.a 00:14:10.256 SO libspdk_bdev_nvme.so.7.1 00:14:10.256 SYMLINK libspdk_bdev_nvme.so 00:14:10.826 CC module/event/subsystems/scheduler/scheduler.o 00:14:10.826 CC module/event/subsystems/keyring/keyring.o 00:14:10.826 CC module/event/subsystems/sock/sock.o 00:14:10.826 CC module/event/subsystems/iobuf/iobuf.o 00:14:10.826 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:14:10.826 CC module/event/subsystems/fsdev/fsdev.o 00:14:10.826 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:14:10.826 CC module/event/subsystems/vmd/vmd.o 00:14:10.826 CC module/event/subsystems/vmd/vmd_rpc.o 00:14:10.826 LIB libspdk_event_keyring.a 00:14:10.826 LIB libspdk_event_scheduler.a 00:14:10.826 LIB libspdk_event_fsdev.a 00:14:10.826 LIB libspdk_event_sock.a 00:14:10.826 LIB libspdk_event_vmd.a 00:14:10.826 LIB libspdk_event_vhost_blk.a 00:14:10.826 LIB libspdk_event_iobuf.a 00:14:10.826 SO libspdk_event_scheduler.so.4.0 00:14:10.826 SO libspdk_event_keyring.so.1.0 00:14:10.826 SO libspdk_event_fsdev.so.1.0 00:14:10.826 SO libspdk_event_sock.so.5.0 00:14:10.826 SO libspdk_event_vhost_blk.so.3.0 00:14:10.826 SO libspdk_event_vmd.so.6.0 00:14:10.826 SO libspdk_event_iobuf.so.3.0 00:14:10.826 SYMLINK libspdk_event_keyring.so 00:14:10.826 SYMLINK libspdk_event_sock.so 00:14:10.826 SYMLINK libspdk_event_scheduler.so 00:14:10.826 SYMLINK libspdk_event_fsdev.so 00:14:10.826 SYMLINK libspdk_event_vhost_blk.so 00:14:10.826 SYMLINK libspdk_event_vmd.so 00:14:10.826 SYMLINK libspdk_event_iobuf.so 00:14:11.089 CC module/event/subsystems/accel/accel.o 00:14:11.350 LIB libspdk_event_accel.a 00:14:11.350 SO libspdk_event_accel.so.6.0 00:14:11.350 SYMLINK libspdk_event_accel.so 00:14:11.611 CC module/event/subsystems/bdev/bdev.o 00:14:11.871 LIB libspdk_event_bdev.a 00:14:11.871 SO libspdk_event_bdev.so.6.0 00:14:11.871 SYMLINK libspdk_event_bdev.so 00:14:11.871 CC module/event/subsystems/scsi/scsi.o 00:14:11.871 CC module/event/subsystems/ublk/ublk.o 00:14:12.131 CC 
module/event/subsystems/nvmf/nvmf_rpc.o 00:14:12.131 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:14:12.131 CC module/event/subsystems/nbd/nbd.o 00:14:12.131 LIB libspdk_event_ublk.a 00:14:12.131 LIB libspdk_event_scsi.a 00:14:12.131 LIB libspdk_event_nbd.a 00:14:12.131 SO libspdk_event_ublk.so.3.0 00:14:12.131 SO libspdk_event_scsi.so.6.0 00:14:12.131 SO libspdk_event_nbd.so.6.0 00:14:12.131 SYMLINK libspdk_event_ublk.so 00:14:12.131 SYMLINK libspdk_event_scsi.so 00:14:12.131 LIB libspdk_event_nvmf.a 00:14:12.131 SYMLINK libspdk_event_nbd.so 00:14:12.131 SO libspdk_event_nvmf.so.6.0 00:14:12.392 SYMLINK libspdk_event_nvmf.so 00:14:12.392 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:14:12.392 CC module/event/subsystems/iscsi/iscsi.o 00:14:12.652 LIB libspdk_event_vhost_scsi.a 00:14:12.652 SO libspdk_event_vhost_scsi.so.3.0 00:14:12.652 LIB libspdk_event_iscsi.a 00:14:12.652 SO libspdk_event_iscsi.so.6.0 00:14:12.652 SYMLINK libspdk_event_vhost_scsi.so 00:14:12.652 SYMLINK libspdk_event_iscsi.so 00:14:12.652 SO libspdk.so.6.0 00:14:12.652 SYMLINK libspdk.so 00:14:12.914 TEST_HEADER include/spdk/accel.h 00:14:12.914 TEST_HEADER include/spdk/accel_module.h 00:14:12.914 CXX app/trace/trace.o 00:14:12.914 TEST_HEADER include/spdk/assert.h 00:14:12.914 TEST_HEADER include/spdk/barrier.h 00:14:12.914 TEST_HEADER include/spdk/base64.h 00:14:12.914 CC app/trace_record/trace_record.o 00:14:12.914 TEST_HEADER include/spdk/bdev.h 00:14:12.914 TEST_HEADER include/spdk/bdev_module.h 00:14:12.914 TEST_HEADER include/spdk/bdev_zone.h 00:14:12.914 TEST_HEADER include/spdk/bit_array.h 00:14:12.914 TEST_HEADER include/spdk/bit_pool.h 00:14:12.914 TEST_HEADER include/spdk/blob_bdev.h 00:14:12.914 TEST_HEADER include/spdk/blobfs_bdev.h 00:14:12.914 TEST_HEADER include/spdk/blobfs.h 00:14:12.914 TEST_HEADER include/spdk/blob.h 00:14:12.914 TEST_HEADER include/spdk/conf.h 00:14:12.914 TEST_HEADER include/spdk/config.h 00:14:12.914 TEST_HEADER include/spdk/cpuset.h 00:14:12.914 TEST_HEADER include/spdk/crc16.h 00:14:12.914 TEST_HEADER include/spdk/crc32.h 00:14:12.914 TEST_HEADER include/spdk/crc64.h 00:14:12.914 TEST_HEADER include/spdk/dif.h 00:14:12.914 TEST_HEADER include/spdk/dma.h 00:14:12.914 TEST_HEADER include/spdk/endian.h 00:14:12.914 TEST_HEADER include/spdk/env_dpdk.h 00:14:12.914 TEST_HEADER include/spdk/env.h 00:14:12.914 TEST_HEADER include/spdk/event.h 00:14:12.914 TEST_HEADER include/spdk/fd_group.h 00:14:12.914 CC app/nvmf_tgt/nvmf_main.o 00:14:12.914 TEST_HEADER include/spdk/fd.h 00:14:12.914 TEST_HEADER include/spdk/file.h 00:14:12.914 TEST_HEADER include/spdk/fsdev.h 00:14:12.914 TEST_HEADER include/spdk/fsdev_module.h 00:14:12.914 TEST_HEADER include/spdk/ftl.h 00:14:12.914 TEST_HEADER include/spdk/fuse_dispatcher.h 00:14:12.914 TEST_HEADER include/spdk/gpt_spec.h 00:14:12.914 CC test/thread/poller_perf/poller_perf.o 00:14:12.914 TEST_HEADER include/spdk/hexlify.h 00:14:12.914 TEST_HEADER include/spdk/histogram_data.h 00:14:12.914 TEST_HEADER include/spdk/idxd.h 00:14:12.914 TEST_HEADER include/spdk/idxd_spec.h 00:14:12.914 TEST_HEADER include/spdk/init.h 00:14:12.914 TEST_HEADER include/spdk/ioat.h 00:14:12.914 TEST_HEADER include/spdk/ioat_spec.h 00:14:12.914 CC examples/util/zipf/zipf.o 00:14:12.914 CC examples/ioat/perf/perf.o 00:14:12.914 TEST_HEADER include/spdk/iscsi_spec.h 00:14:12.914 TEST_HEADER include/spdk/json.h 00:14:12.914 TEST_HEADER include/spdk/jsonrpc.h 00:14:12.914 TEST_HEADER include/spdk/keyring.h 00:14:12.914 TEST_HEADER include/spdk/keyring_module.h 
00:14:12.914 TEST_HEADER include/spdk/likely.h 00:14:12.914 TEST_HEADER include/spdk/log.h 00:14:12.914 TEST_HEADER include/spdk/lvol.h 00:14:12.914 TEST_HEADER include/spdk/md5.h 00:14:13.175 TEST_HEADER include/spdk/memory.h 00:14:13.176 TEST_HEADER include/spdk/mmio.h 00:14:13.176 TEST_HEADER include/spdk/nbd.h 00:14:13.176 TEST_HEADER include/spdk/net.h 00:14:13.176 TEST_HEADER include/spdk/notify.h 00:14:13.176 TEST_HEADER include/spdk/nvme.h 00:14:13.176 TEST_HEADER include/spdk/nvme_intel.h 00:14:13.176 TEST_HEADER include/spdk/nvme_ocssd.h 00:14:13.176 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:14:13.176 TEST_HEADER include/spdk/nvme_spec.h 00:14:13.176 TEST_HEADER include/spdk/nvme_zns.h 00:14:13.176 CC test/dma/test_dma/test_dma.o 00:14:13.176 TEST_HEADER include/spdk/nvmf_cmd.h 00:14:13.176 CC test/app/bdev_svc/bdev_svc.o 00:14:13.176 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:14:13.176 TEST_HEADER include/spdk/nvmf.h 00:14:13.176 TEST_HEADER include/spdk/nvmf_spec.h 00:14:13.176 TEST_HEADER include/spdk/nvmf_transport.h 00:14:13.176 TEST_HEADER include/spdk/opal.h 00:14:13.176 TEST_HEADER include/spdk/opal_spec.h 00:14:13.176 TEST_HEADER include/spdk/pci_ids.h 00:14:13.176 TEST_HEADER include/spdk/pipe.h 00:14:13.176 TEST_HEADER include/spdk/queue.h 00:14:13.176 TEST_HEADER include/spdk/reduce.h 00:14:13.176 TEST_HEADER include/spdk/rpc.h 00:14:13.176 TEST_HEADER include/spdk/scheduler.h 00:14:13.176 TEST_HEADER include/spdk/scsi.h 00:14:13.176 TEST_HEADER include/spdk/scsi_spec.h 00:14:13.176 TEST_HEADER include/spdk/sock.h 00:14:13.176 TEST_HEADER include/spdk/stdinc.h 00:14:13.176 TEST_HEADER include/spdk/string.h 00:14:13.176 CC test/env/mem_callbacks/mem_callbacks.o 00:14:13.176 TEST_HEADER include/spdk/thread.h 00:14:13.176 TEST_HEADER include/spdk/trace.h 00:14:13.176 TEST_HEADER include/spdk/trace_parser.h 00:14:13.176 TEST_HEADER include/spdk/tree.h 00:14:13.176 TEST_HEADER include/spdk/ublk.h 00:14:13.176 TEST_HEADER include/spdk/util.h 00:14:13.176 TEST_HEADER include/spdk/uuid.h 00:14:13.176 TEST_HEADER include/spdk/version.h 00:14:13.176 TEST_HEADER include/spdk/vfio_user_pci.h 00:14:13.176 TEST_HEADER include/spdk/vfio_user_spec.h 00:14:13.176 TEST_HEADER include/spdk/vhost.h 00:14:13.176 TEST_HEADER include/spdk/vmd.h 00:14:13.176 TEST_HEADER include/spdk/xor.h 00:14:13.176 TEST_HEADER include/spdk/zipf.h 00:14:13.176 LINK nvmf_tgt 00:14:13.176 CXX test/cpp_headers/accel.o 00:14:13.176 LINK poller_perf 00:14:13.176 LINK zipf 00:14:13.176 LINK spdk_trace_record 00:14:13.176 LINK ioat_perf 00:14:13.176 LINK bdev_svc 00:14:13.176 LINK spdk_trace 00:14:13.439 CXX test/cpp_headers/accel_module.o 00:14:13.439 CC test/env/vtophys/vtophys.o 00:14:13.439 CC examples/ioat/verify/verify.o 00:14:13.439 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:14:13.439 CXX test/cpp_headers/assert.o 00:14:13.439 CC test/env/memory/memory_ut.o 00:14:13.439 LINK vtophys 00:14:13.439 LINK mem_callbacks 00:14:13.439 LINK test_dma 00:14:13.439 CC app/iscsi_tgt/iscsi_tgt.o 00:14:13.439 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:14:13.785 LINK env_dpdk_post_init 00:14:13.785 LINK verify 00:14:13.785 CXX test/cpp_headers/barrier.o 00:14:13.785 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:14:13.785 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:14:13.785 CXX test/cpp_headers/base64.o 00:14:13.785 LINK iscsi_tgt 00:14:13.785 CC test/app/histogram_perf/histogram_perf.o 00:14:13.785 CC test/app/jsoncat/jsoncat.o 00:14:13.785 CC test/env/pci/pci_ut.o 00:14:13.785 CC 
test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:14:13.785 CXX test/cpp_headers/bdev.o 00:14:13.785 LINK jsoncat 00:14:13.785 LINK histogram_perf 00:14:14.047 LINK nvme_fuzz 00:14:14.047 LINK pci_ut 00:14:14.047 CXX test/cpp_headers/bdev_module.o 00:14:14.047 CC app/spdk_tgt/spdk_tgt.o 00:14:14.047 CC test/app/stub/stub.o 00:14:14.047 CC test/rpc_client/rpc_client_test.o 00:14:14.047 LINK vhost_fuzz 00:14:14.308 CXX test/cpp_headers/bdev_zone.o 00:14:14.308 LINK spdk_tgt 00:14:14.308 LINK stub 00:14:14.309 CC test/accel/dif/dif.o 00:14:14.309 LINK rpc_client_test 00:14:14.309 CXX test/cpp_headers/bit_array.o 00:14:14.569 CC test/event/event_perf/event_perf.o 00:14:14.569 LINK memory_ut 00:14:14.569 CC test/blobfs/mkfs/mkfs.o 00:14:14.569 CC app/spdk_lspci/spdk_lspci.o 00:14:14.569 CC examples/interrupt_tgt/interrupt_tgt.o 00:14:14.569 CXX test/cpp_headers/bit_pool.o 00:14:14.569 LINK event_perf 00:14:14.569 LINK spdk_lspci 00:14:14.569 LINK mkfs 00:14:14.830 CC test/lvol/esnap/esnap.o 00:14:14.830 CXX test/cpp_headers/blob_bdev.o 00:14:14.830 LINK interrupt_tgt 00:14:14.830 LINK dif 00:14:14.830 CC test/event/reactor/reactor.o 00:14:14.830 CC test/nvme/aer/aer.o 00:14:14.830 CC app/spdk_nvme_perf/perf.o 00:14:14.830 CXX test/cpp_headers/blobfs_bdev.o 00:14:14.830 CC test/nvme/reset/reset.o 00:14:14.830 LINK reactor 00:14:15.090 CXX test/cpp_headers/blobfs.o 00:14:15.090 CC examples/thread/thread/thread_ex.o 00:14:15.090 CC test/nvme/sgl/sgl.o 00:14:15.090 LINK iscsi_fuzz 00:14:15.090 LINK aer 00:14:15.090 CC test/event/reactor_perf/reactor_perf.o 00:14:15.090 LINK reset 00:14:15.090 CXX test/cpp_headers/blob.o 00:14:15.350 LINK thread 00:14:15.350 LINK reactor_perf 00:14:15.350 LINK sgl 00:14:15.350 CC test/nvme/e2edp/nvme_dp.o 00:14:15.350 CXX test/cpp_headers/conf.o 00:14:15.350 CC test/nvme/overhead/overhead.o 00:14:15.608 CC test/bdev/bdevio/bdevio.o 00:14:15.608 CXX test/cpp_headers/config.o 00:14:15.608 LINK spdk_nvme_perf 00:14:15.608 CXX test/cpp_headers/cpuset.o 00:14:15.608 CC test/event/app_repeat/app_repeat.o 00:14:15.608 LINK nvme_dp 00:14:15.608 CC test/nvme/err_injection/err_injection.o 00:14:15.608 LINK overhead 00:14:15.868 CC examples/sock/hello_world/hello_sock.o 00:14:15.868 CXX test/cpp_headers/crc16.o 00:14:15.868 LINK app_repeat 00:14:15.868 CXX test/cpp_headers/crc32.o 00:14:15.868 CC app/spdk_nvme_identify/identify.o 00:14:15.868 LINK err_injection 00:14:15.868 LINK bdevio 00:14:15.868 CXX test/cpp_headers/crc64.o 00:14:15.868 CC test/nvme/startup/startup.o 00:14:15.868 LINK hello_sock 00:14:16.129 CC test/nvme/reserve/reserve.o 00:14:16.129 CC test/event/scheduler/scheduler.o 00:14:16.129 CC test/nvme/simple_copy/simple_copy.o 00:14:16.129 CXX test/cpp_headers/dif.o 00:14:16.129 LINK startup 00:14:16.129 CC test/nvme/connect_stress/connect_stress.o 00:14:16.129 CC test/nvme/boot_partition/boot_partition.o 00:14:16.129 CXX test/cpp_headers/dma.o 00:14:16.389 LINK reserve 00:14:16.389 LINK scheduler 00:14:16.389 CXX test/cpp_headers/endian.o 00:14:16.389 LINK simple_copy 00:14:16.389 LINK boot_partition 00:14:16.389 LINK connect_stress 00:14:16.389 CXX test/cpp_headers/env_dpdk.o 00:14:16.389 CXX test/cpp_headers/env.o 00:14:16.389 CXX test/cpp_headers/event.o 00:14:16.389 CXX test/cpp_headers/fd_group.o 00:14:16.649 CXX test/cpp_headers/fd.o 00:14:16.649 LINK spdk_nvme_identify 00:14:16.649 CC test/nvme/compliance/nvme_compliance.o 00:14:16.649 CC test/nvme/fused_ordering/fused_ordering.o 00:14:16.649 CXX test/cpp_headers/file.o 00:14:16.909 CC 
test/nvme/doorbell_aers/doorbell_aers.o 00:14:16.909 CXX test/cpp_headers/fsdev.o 00:14:16.909 CC test/nvme/cuse/cuse.o 00:14:16.909 CC test/nvme/fdp/fdp.o 00:14:16.909 CC app/spdk_nvme_discover/discovery_aer.o 00:14:16.909 LINK fused_ordering 00:14:16.909 CXX test/cpp_headers/fsdev_module.o 00:14:16.909 CXX test/cpp_headers/ftl.o 00:14:16.909 LINK nvme_compliance 00:14:17.169 LINK doorbell_aers 00:14:17.169 CXX test/cpp_headers/fuse_dispatcher.o 00:14:17.169 CXX test/cpp_headers/gpt_spec.o 00:14:17.169 LINK spdk_nvme_discover 00:14:17.169 CXX test/cpp_headers/hexlify.o 00:14:17.169 CXX test/cpp_headers/histogram_data.o 00:14:17.169 CXX test/cpp_headers/idxd.o 00:14:17.445 LINK fdp 00:14:17.445 CXX test/cpp_headers/idxd_spec.o 00:14:17.445 CXX test/cpp_headers/init.o 00:14:17.445 CXX test/cpp_headers/ioat.o 00:14:17.445 CXX test/cpp_headers/ioat_spec.o 00:14:17.445 CXX test/cpp_headers/iscsi_spec.o 00:14:17.445 CC app/spdk_top/spdk_top.o 00:14:17.445 CXX test/cpp_headers/json.o 00:14:17.445 CXX test/cpp_headers/jsonrpc.o 00:14:17.445 CXX test/cpp_headers/keyring.o 00:14:17.725 CXX test/cpp_headers/keyring_module.o 00:14:17.725 CXX test/cpp_headers/likely.o 00:14:17.725 CXX test/cpp_headers/log.o 00:14:17.725 CXX test/cpp_headers/lvol.o 00:14:17.725 CC app/vhost/vhost.o 00:14:17.725 CXX test/cpp_headers/md5.o 00:14:17.987 CC examples/vmd/lsvmd/lsvmd.o 00:14:17.987 CC app/spdk_dd/spdk_dd.o 00:14:17.987 LINK vhost 00:14:17.987 CC app/fio/nvme/fio_plugin.o 00:14:17.987 CXX test/cpp_headers/memory.o 00:14:17.987 CC examples/idxd/perf/perf.o 00:14:17.987 LINK lsvmd 00:14:18.247 LINK cuse 00:14:18.247 CXX test/cpp_headers/mmio.o 00:14:18.247 CXX test/cpp_headers/nbd.o 00:14:18.247 CC examples/vmd/led/led.o 00:14:18.247 LINK spdk_dd 00:14:18.247 LINK spdk_top 00:14:18.247 LINK idxd_perf 00:14:18.247 CXX test/cpp_headers/net.o 00:14:18.508 LINK led 00:14:18.508 CC examples/fsdev/hello_world/hello_fsdev.o 00:14:18.508 CC examples/accel/perf/accel_perf.o 00:14:18.508 CXX test/cpp_headers/notify.o 00:14:18.508 LINK spdk_nvme 00:14:18.508 CXX test/cpp_headers/nvme.o 00:14:18.508 CC app/fio/bdev/fio_plugin.o 00:14:18.508 CC examples/nvme/hello_world/hello_world.o 00:14:18.767 CC examples/blob/hello_world/hello_blob.o 00:14:18.767 CXX test/cpp_headers/nvme_intel.o 00:14:18.767 LINK hello_fsdev 00:14:18.767 CC examples/blob/cli/blobcli.o 00:14:18.767 CC examples/nvme/reconnect/reconnect.o 00:14:18.767 LINK hello_blob 00:14:18.767 LINK hello_world 00:14:19.028 LINK accel_perf 00:14:19.028 CXX test/cpp_headers/nvme_ocssd.o 00:14:19.028 CC examples/nvme/nvme_manage/nvme_manage.o 00:14:19.028 LINK spdk_bdev 00:14:19.028 CXX test/cpp_headers/nvme_ocssd_spec.o 00:14:19.028 LINK reconnect 00:14:19.028 CXX test/cpp_headers/nvme_spec.o 00:14:19.028 CC examples/nvme/arbitration/arbitration.o 00:14:19.288 LINK blobcli 00:14:19.288 CXX test/cpp_headers/nvme_zns.o 00:14:19.288 CC examples/bdev/hello_world/hello_bdev.o 00:14:19.288 CC examples/bdev/bdevperf/bdevperf.o 00:14:19.288 CC examples/nvme/hotplug/hotplug.o 00:14:19.288 CC examples/nvme/cmb_copy/cmb_copy.o 00:14:19.288 CC examples/nvme/abort/abort.o 00:14:19.549 LINK arbitration 00:14:19.549 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:14:19.549 CXX test/cpp_headers/nvmf_cmd.o 00:14:19.549 LINK nvme_manage 00:14:19.549 LINK cmb_copy 00:14:19.549 LINK hello_bdev 00:14:19.549 LINK esnap 00:14:19.549 LINK hotplug 00:14:19.549 CXX test/cpp_headers/nvmf_fc_spec.o 00:14:19.549 CXX test/cpp_headers/nvmf.o 00:14:19.549 LINK pmr_persistence 00:14:19.549 CXX 
test/cpp_headers/nvmf_spec.o 00:14:19.549 CXX test/cpp_headers/nvmf_transport.o 00:14:19.809 CXX test/cpp_headers/opal.o 00:14:19.809 CXX test/cpp_headers/opal_spec.o 00:14:19.809 LINK abort 00:14:19.809 CXX test/cpp_headers/pci_ids.o 00:14:19.809 CXX test/cpp_headers/pipe.o 00:14:19.809 CXX test/cpp_headers/queue.o 00:14:19.809 CXX test/cpp_headers/reduce.o 00:14:19.809 CXX test/cpp_headers/rpc.o 00:14:19.809 CXX test/cpp_headers/scheduler.o 00:14:19.809 CXX test/cpp_headers/scsi.o 00:14:19.809 CXX test/cpp_headers/scsi_spec.o 00:14:19.809 CXX test/cpp_headers/sock.o 00:14:19.809 CXX test/cpp_headers/stdinc.o 00:14:19.809 CXX test/cpp_headers/string.o 00:14:20.070 CXX test/cpp_headers/thread.o 00:14:20.070 CXX test/cpp_headers/trace.o 00:14:20.070 CXX test/cpp_headers/trace_parser.o 00:14:20.070 CXX test/cpp_headers/tree.o 00:14:20.070 CXX test/cpp_headers/ublk.o 00:14:20.070 LINK bdevperf 00:14:20.070 CXX test/cpp_headers/util.o 00:14:20.070 CXX test/cpp_headers/uuid.o 00:14:20.070 CXX test/cpp_headers/version.o 00:14:20.070 CXX test/cpp_headers/vfio_user_pci.o 00:14:20.070 CXX test/cpp_headers/vfio_user_spec.o 00:14:20.070 CXX test/cpp_headers/vhost.o 00:14:20.070 CXX test/cpp_headers/vmd.o 00:14:20.070 CXX test/cpp_headers/xor.o 00:14:20.070 CXX test/cpp_headers/zipf.o 00:14:20.644 CC examples/nvmf/nvmf/nvmf.o 00:14:20.905 LINK nvmf 00:14:21.180 00:14:21.180 real 1m20.360s 00:14:21.180 user 7m24.163s 00:14:21.180 sys 1m37.004s 00:14:21.180 ************************************ 00:14:21.180 END TEST make 00:14:21.180 ************************************ 00:14:21.180 12:13:16 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:14:21.180 12:13:16 make -- common/autotest_common.sh@10 -- $ set +x 00:14:21.180 12:13:16 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:14:21.180 12:13:16 -- pm/common@29 -- $ signal_monitor_resources TERM 00:14:21.180 12:13:16 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:14:21.180 12:13:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:14:21.180 12:13:16 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:14:21.180 12:13:16 -- pm/common@44 -- $ pid=5029 00:14:21.180 12:13:16 -- pm/common@50 -- $ kill -TERM 5029 00:14:21.180 12:13:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:14:21.180 12:13:16 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:14:21.180 12:13:16 -- pm/common@44 -- $ pid=5031 00:14:21.180 12:13:16 -- pm/common@50 -- $ kill -TERM 5031 00:14:21.180 12:13:16 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:14:21.180 12:13:16 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:14:21.180 12:13:16 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:21.180 12:13:16 -- common/autotest_common.sh@1693 -- # lcov --version 00:14:21.180 12:13:16 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:21.180 12:13:17 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:21.180 12:13:17 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:21.180 12:13:17 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:21.180 12:13:17 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:21.180 12:13:17 -- scripts/common.sh@336 -- # IFS=.-: 00:14:21.180 12:13:17 -- scripts/common.sh@336 -- # read -ra ver1 00:14:21.180 12:13:17 -- scripts/common.sh@337 -- # IFS=.-: 00:14:21.180 
12:13:17 -- scripts/common.sh@337 -- # read -ra ver2 00:14:21.180 12:13:17 -- scripts/common.sh@338 -- # local 'op=<' 00:14:21.180 12:13:17 -- scripts/common.sh@340 -- # ver1_l=2 00:14:21.180 12:13:17 -- scripts/common.sh@341 -- # ver2_l=1 00:14:21.180 12:13:17 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:21.180 12:13:17 -- scripts/common.sh@344 -- # case "$op" in 00:14:21.180 12:13:17 -- scripts/common.sh@345 -- # : 1 00:14:21.180 12:13:17 -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:21.180 12:13:17 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:21.180 12:13:17 -- scripts/common.sh@365 -- # decimal 1 00:14:21.180 12:13:17 -- scripts/common.sh@353 -- # local d=1 00:14:21.180 12:13:17 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:21.180 12:13:17 -- scripts/common.sh@355 -- # echo 1 00:14:21.180 12:13:17 -- scripts/common.sh@365 -- # ver1[v]=1 00:14:21.180 12:13:17 -- scripts/common.sh@366 -- # decimal 2 00:14:21.180 12:13:17 -- scripts/common.sh@353 -- # local d=2 00:14:21.180 12:13:17 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:21.180 12:13:17 -- scripts/common.sh@355 -- # echo 2 00:14:21.180 12:13:17 -- scripts/common.sh@366 -- # ver2[v]=2 00:14:21.180 12:13:17 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:21.180 12:13:17 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:21.180 12:13:17 -- scripts/common.sh@368 -- # return 0 00:14:21.180 12:13:17 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:21.180 12:13:17 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:21.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.180 --rc genhtml_branch_coverage=1 00:14:21.180 --rc genhtml_function_coverage=1 00:14:21.180 --rc genhtml_legend=1 00:14:21.180 --rc geninfo_all_blocks=1 00:14:21.180 --rc geninfo_unexecuted_blocks=1 00:14:21.180 00:14:21.180 ' 00:14:21.180 12:13:17 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:21.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.180 --rc genhtml_branch_coverage=1 00:14:21.180 --rc genhtml_function_coverage=1 00:14:21.180 --rc genhtml_legend=1 00:14:21.180 --rc geninfo_all_blocks=1 00:14:21.180 --rc geninfo_unexecuted_blocks=1 00:14:21.180 00:14:21.180 ' 00:14:21.180 12:13:17 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:21.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.180 --rc genhtml_branch_coverage=1 00:14:21.180 --rc genhtml_function_coverage=1 00:14:21.180 --rc genhtml_legend=1 00:14:21.180 --rc geninfo_all_blocks=1 00:14:21.180 --rc geninfo_unexecuted_blocks=1 00:14:21.180 00:14:21.180 ' 00:14:21.180 12:13:17 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:21.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.180 --rc genhtml_branch_coverage=1 00:14:21.180 --rc genhtml_function_coverage=1 00:14:21.180 --rc genhtml_legend=1 00:14:21.180 --rc geninfo_all_blocks=1 00:14:21.180 --rc geninfo_unexecuted_blocks=1 00:14:21.180 00:14:21.180 ' 00:14:21.180 12:13:17 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:21.180 12:13:17 -- nvmf/common.sh@7 -- # uname -s 00:14:21.180 12:13:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:21.180 12:13:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:21.180 12:13:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:21.180 12:13:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
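(The scripts/common.sh trace a few entries up, from "lt 1.15 2" through "return 0", is a dotted-version comparison: both version strings are split on ".", "-", and ":", then compared field by field to decide whether the installed lcov predates 2.x and therefore needs the legacy "--rc lcov_*" options. A self-contained sketch of that logic, for illustration only: it assumes purely numeric fields and is not the SPDK script verbatim.)

lt() {   # usage: lt 1.15 2  -> exit status 0 when $1 sorts before $2
  local -a ver1 ver2
  local v len
  IFS=.-: read -ra ver1 <<< "$1"   # split on dots, dashes, colons
  IFS=.-: read -ra ver2 <<< "$2"
  len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < len; v++ )); do
    # a missing field counts as 0, so "2" compares like "2.0.0"
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
  done
  return 1   # equal versions are not "less than"
}
lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # the branch taken in the trace above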
00:14:21.180 12:13:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:21.181 12:13:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:21.181 12:13:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:21.181 12:13:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:21.181 12:13:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:21.181 12:13:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:21.500 12:13:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:14:21.500 12:13:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:14:21.500 12:13:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:21.500 12:13:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:21.500 12:13:17 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:21.500 12:13:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:21.500 12:13:17 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:21.500 12:13:17 -- scripts/common.sh@15 -- # shopt -s extglob 00:14:21.500 12:13:17 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:21.500 12:13:17 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:21.500 12:13:17 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:21.500 12:13:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.500 12:13:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.500 12:13:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.500 12:13:17 -- paths/export.sh@5 -- # export PATH 00:14:21.500 12:13:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.500 12:13:17 -- nvmf/common.sh@51 -- # : 0 00:14:21.500 12:13:17 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:21.500 12:13:17 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:21.500 12:13:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:21.500 12:13:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:21.500 12:13:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:21.500 12:13:17 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:21.500 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:21.500 12:13:17 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:21.500 12:13:17 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:21.500 12:13:17 -- nvmf/common.sh@55 -- # have_pci_nics=0 
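(One recorded failure worth noting in the nvmf/common.sh sourcing above: "line 33: [: : integer expression expected" comes from the traced command '[' '' -eq 1 ']'. test's numeric -eq operator cannot evaluate an empty string, which is what an unset variable expanded to here. The run continues because the check is non-fatal, but the usual bash guards look like this; a minimal sketch, and the variable name is hypothetical rather than the one in nvmf/common.sh.)

flag=''                                     # unset/empty in this environment
# [ "$flag" -eq 1 ]                         # reproduces: [: : integer expression expected
[ "${flag:-0}" -eq 1 ] || echo "not set"    # guard 1: default the empty value to 0
[ -n "$flag" ] && [ "$flag" -eq 1 ]         # guard 2: require non-empty before comparing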
00:14:21.500 12:13:17 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:14:21.500 12:13:17 -- spdk/autotest.sh@32 -- # uname -s 00:14:21.500 12:13:17 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:14:21.500 12:13:17 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:14:21.500 12:13:17 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:14:21.500 12:13:17 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:14:21.500 12:13:17 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:14:21.500 12:13:17 -- spdk/autotest.sh@44 -- # modprobe nbd 00:14:21.500 12:13:17 -- spdk/autotest.sh@46 -- # type -P udevadm 00:14:21.500 12:13:17 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:14:21.500 12:13:17 -- spdk/autotest.sh@48 -- # udevadm_pid=55717 00:14:21.500 12:13:17 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:14:21.500 12:13:17 -- pm/common@17 -- # local monitor 00:14:21.500 12:13:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:14:21.500 12:13:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:14:21.500 12:13:17 -- pm/common@25 -- # sleep 1 00:14:21.500 12:13:17 -- pm/common@21 -- # date +%s 00:14:21.500 12:13:17 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:14:21.500 12:13:17 -- pm/common@21 -- # date +%s 00:14:21.500 12:13:17 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732536797 00:14:21.500 12:13:17 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732536797 00:14:21.500 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732536797_collect-vmstat.pm.log 00:14:21.500 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732536797_collect-cpu-load.pm.log 00:14:22.442 12:13:18 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:14:22.442 12:13:18 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:14:22.442 12:13:18 -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:22.442 12:13:18 -- common/autotest_common.sh@10 -- # set +x 00:14:22.442 12:13:18 -- spdk/autotest.sh@59 -- # create_test_list 00:14:22.442 12:13:18 -- common/autotest_common.sh@752 -- # xtrace_disable 00:14:22.442 12:13:18 -- common/autotest_common.sh@10 -- # set +x 00:14:22.442 12:13:18 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:14:22.442 12:13:18 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:14:22.442 12:13:18 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:14:22.442 12:13:18 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:14:22.442 12:13:18 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:14:22.442 12:13:18 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:14:22.442 12:13:18 -- common/autotest_common.sh@1457 -- # uname 00:14:22.442 12:13:18 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:14:22.442 12:13:18 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:14:22.442 12:13:18 -- common/autotest_common.sh@1477 -- # uname 00:14:22.442 12:13:18 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:14:22.442 12:13:18 
-- spdk/autotest.sh@68 -- # [[ y == y ]] 00:14:22.442 12:13:18 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:14:22.442 lcov: LCOV version 1.15 00:14:22.442 12:13:18 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:14:37.452 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:14:37.452 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:14:52.355 12:13:47 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:14:52.355 12:13:47 -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:52.355 12:13:47 -- common/autotest_common.sh@10 -- # set +x 00:14:52.355 12:13:47 -- spdk/autotest.sh@78 -- # rm -f 00:14:52.355 12:13:47 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:52.615 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:52.615 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:14:52.875 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:14:52.875 12:13:48 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:14:52.875 12:13:48 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:14:52.875 12:13:48 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:14:52.875 12:13:48 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:14:52.875 12:13:48 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:14:52.875 12:13:48 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:14:52.876 12:13:48 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:14:52.876 12:13:48 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:14:52.876 12:13:48 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:14:52.876 12:13:48 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:14:52.876 12:13:48 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:14:52.876 12:13:48 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:14:52.876 12:13:48 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:14:52.876 12:13:48 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:14:52.876 12:13:48 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:14:52.876 12:13:48 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:14:52.876 12:13:48 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:14:52.876 12:13:48 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:14:52.876 12:13:48 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:14:52.876 12:13:48 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:14:52.876 12:13:48 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:14:52.876 12:13:48 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:14:52.876 12:13:48 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:14:52.876 
12:13:48 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:14:52.876 12:13:48 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:14:52.876 12:13:48 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:14:52.876 12:13:48 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:14:52.876 12:13:48 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:14:52.876 12:13:48 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:14:52.876 12:13:48 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:14:52.876 No valid GPT data, bailing 00:14:52.876 12:13:48 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:14:52.876 12:13:48 -- scripts/common.sh@394 -- # pt= 00:14:52.876 12:13:48 -- scripts/common.sh@395 -- # return 1 00:14:52.876 12:13:48 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:14:52.876 1+0 records in 00:14:52.876 1+0 records out 00:14:52.876 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00730722 s, 143 MB/s 00:14:52.876 12:13:48 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:14:52.876 12:13:48 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:14:52.876 12:13:48 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:14:52.876 12:13:48 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:14:52.876 12:13:48 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:14:52.876 No valid GPT data, bailing 00:14:52.876 12:13:48 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:14:52.876 12:13:48 -- scripts/common.sh@394 -- # pt= 00:14:52.876 12:13:48 -- scripts/common.sh@395 -- # return 1 00:14:52.876 12:13:48 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:14:52.876 1+0 records in 00:14:52.876 1+0 records out 00:14:52.876 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0054507 s, 192 MB/s 00:14:52.876 12:13:48 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:14:52.876 12:13:48 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:14:52.876 12:13:48 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:14:52.876 12:13:48 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:14:52.876 12:13:48 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:14:52.876 No valid GPT data, bailing 00:14:52.876 12:13:48 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:14:53.137 12:13:48 -- scripts/common.sh@394 -- # pt= 00:14:53.137 12:13:48 -- scripts/common.sh@395 -- # return 1 00:14:53.137 12:13:48 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:14:53.137 1+0 records in 00:14:53.137 1+0 records out 00:14:53.137 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00401236 s, 261 MB/s 00:14:53.137 12:13:48 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:14:53.137 12:13:48 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:14:53.137 12:13:48 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:14:53.137 12:13:48 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:14:53.137 12:13:48 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:14:53.137 No valid GPT data, bailing 00:14:53.137 12:13:48 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:14:53.137 12:13:48 -- scripts/common.sh@394 -- # pt= 00:14:53.137 12:13:48 -- scripts/common.sh@395 -- # return 1 00:14:53.137 12:13:48 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 
00:14:53.137 1+0 records in 00:14:53.137 1+0 records out 00:14:53.138 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0051566 s, 203 MB/s 00:14:53.138 12:13:48 -- spdk/autotest.sh@105 -- # sync 00:14:53.138 12:13:48 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:14:53.138 12:13:48 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:14:53.138 12:13:48 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:14:55.068 12:13:50 -- spdk/autotest.sh@111 -- # uname -s 00:14:55.068 12:13:50 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:14:55.068 12:13:50 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:14:55.068 12:13:50 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:14:55.330 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:55.330 Hugepages 00:14:55.330 node hugesize free / total 00:14:55.330 node0 1048576kB 0 / 0 00:14:55.330 node0 2048kB 0 / 0 00:14:55.330 00:14:55.330 Type BDF Vendor Device NUMA Driver Device Block devices 00:14:55.330 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:14:55.330 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:14:55.330 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:14:55.330 12:13:51 -- spdk/autotest.sh@117 -- # uname -s 00:14:55.330 12:13:51 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:14:55.330 12:13:51 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:14:55.330 12:13:51 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:55.899 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:56.158 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:56.158 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:56.158 12:13:51 -- common/autotest_common.sh@1517 -- # sleep 1 00:14:57.095 12:13:52 -- common/autotest_common.sh@1518 -- # bdfs=() 00:14:57.095 12:13:52 -- common/autotest_common.sh@1518 -- # local bdfs 00:14:57.095 12:13:52 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:14:57.095 12:13:52 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:14:57.095 12:13:52 -- common/autotest_common.sh@1498 -- # bdfs=() 00:14:57.095 12:13:52 -- common/autotest_common.sh@1498 -- # local bdfs 00:14:57.095 12:13:52 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:14:57.095 12:13:52 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:14:57.095 12:13:52 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:57.431 12:13:52 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:14:57.431 12:13:52 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:14:57.431 12:13:52 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:57.431 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:57.431 Waiting for block devices as requested 00:14:57.431 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:14:57.693 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:14:57.693 12:13:53 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:14:57.693 12:13:53 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:14:57.693 12:13:53 -- 
common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:14:57.693 12:13:53 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:14:57.693 12:13:53 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:14:57.693 12:13:53 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:14:57.693 12:13:53 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:14:57.693 12:13:53 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:14:57.693 12:13:53 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:14:57.693 12:13:53 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:14:57.693 12:13:53 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:14:57.693 12:13:53 -- common/autotest_common.sh@1531 -- # grep oacs 00:14:57.693 12:13:53 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:14:57.693 12:13:53 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:14:57.693 12:13:53 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:14:57.693 12:13:53 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:14:57.693 12:13:53 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:14:57.693 12:13:53 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:14:57.693 12:13:53 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:14:57.693 12:13:53 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:14:57.693 12:13:53 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:14:57.693 12:13:53 -- common/autotest_common.sh@1543 -- # continue 00:14:57.693 12:13:53 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:14:57.693 12:13:53 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:14:57.693 12:13:53 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:14:57.693 12:13:53 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:14:57.693 12:13:53 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:14:57.693 12:13:53 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:14:57.693 12:13:53 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:14:57.693 12:13:53 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:14:57.693 12:13:53 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:14:57.693 12:13:53 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:14:57.693 12:13:53 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:14:57.693 12:13:53 -- common/autotest_common.sh@1531 -- # grep oacs 00:14:57.693 12:13:53 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:14:57.693 12:13:53 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:14:57.693 12:13:53 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:14:57.693 12:13:53 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:14:57.693 12:13:53 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:14:57.693 12:13:53 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:14:57.693 12:13:53 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:14:57.693 12:13:53 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:14:57.693 12:13:53 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:14:57.693 12:13:53 -- 
common/autotest_common.sh@1543 -- # continue 00:14:57.693 12:13:53 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:14:57.693 12:13:53 -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:57.693 12:13:53 -- common/autotest_common.sh@10 -- # set +x 00:14:57.693 12:13:53 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:14:57.693 12:13:53 -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:57.693 12:13:53 -- common/autotest_common.sh@10 -- # set +x 00:14:57.693 12:13:53 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:58.265 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:58.265 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:58.525 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:58.525 12:13:54 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:14:58.525 12:13:54 -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:58.525 12:13:54 -- common/autotest_common.sh@10 -- # set +x 00:14:58.525 12:13:54 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:14:58.525 12:13:54 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:14:58.525 12:13:54 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:14:58.525 12:13:54 -- common/autotest_common.sh@1563 -- # bdfs=() 00:14:58.525 12:13:54 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:14:58.525 12:13:54 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:14:58.525 12:13:54 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:14:58.525 12:13:54 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:14:58.525 12:13:54 -- common/autotest_common.sh@1498 -- # bdfs=() 00:14:58.525 12:13:54 -- common/autotest_common.sh@1498 -- # local bdfs 00:14:58.525 12:13:54 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:14:58.525 12:13:54 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:14:58.525 12:13:54 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:58.525 12:13:54 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:14:58.525 12:13:54 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:14:58.525 12:13:54 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:14:58.525 12:13:54 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:14:58.525 12:13:54 -- common/autotest_common.sh@1566 -- # device=0x0010 00:14:58.525 12:13:54 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:14:58.525 12:13:54 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:14:58.525 12:13:54 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:14:58.525 12:13:54 -- common/autotest_common.sh@1566 -- # device=0x0010 00:14:58.525 12:13:54 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:14:58.525 12:13:54 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:14:58.525 12:13:54 -- common/autotest_common.sh@1572 -- # return 0 00:14:58.525 12:13:54 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:14:58.525 12:13:54 -- common/autotest_common.sh@1580 -- # return 0 00:14:58.525 12:13:54 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:14:58.525 12:13:54 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:14:58.525 12:13:54 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:14:58.525 12:13:54 -- spdk/autotest.sh@142 -- 
# [[ 0 -eq 1 ]] 00:14:58.525 12:13:54 -- spdk/autotest.sh@149 -- # timing_enter lib 00:14:58.525 12:13:54 -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:58.525 12:13:54 -- common/autotest_common.sh@10 -- # set +x 00:14:58.525 12:13:54 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:14:58.525 12:13:54 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:14:58.525 12:13:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:58.525 12:13:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:58.525 12:13:54 -- common/autotest_common.sh@10 -- # set +x 00:14:58.525 ************************************ 00:14:58.525 START TEST env 00:14:58.525 ************************************ 00:14:58.525 12:13:54 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:14:58.786 * Looking for test storage... 00:14:58.786 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:14:58.786 12:13:54 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:58.786 12:13:54 env -- common/autotest_common.sh@1693 -- # lcov --version 00:14:58.786 12:13:54 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:58.786 12:13:54 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:58.786 12:13:54 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:58.786 12:13:54 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:58.786 12:13:54 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:58.786 12:13:54 env -- scripts/common.sh@336 -- # IFS=.-: 00:14:58.786 12:13:54 env -- scripts/common.sh@336 -- # read -ra ver1 00:14:58.786 12:13:54 env -- scripts/common.sh@337 -- # IFS=.-: 00:14:58.786 12:13:54 env -- scripts/common.sh@337 -- # read -ra ver2 00:14:58.786 12:13:54 env -- scripts/common.sh@338 -- # local 'op=<' 00:14:58.786 12:13:54 env -- scripts/common.sh@340 -- # ver1_l=2 00:14:58.786 12:13:54 env -- scripts/common.sh@341 -- # ver2_l=1 00:14:58.786 12:13:54 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:58.786 12:13:54 env -- scripts/common.sh@344 -- # case "$op" in 00:14:58.786 12:13:54 env -- scripts/common.sh@345 -- # : 1 00:14:58.786 12:13:54 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:58.786 12:13:54 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:58.786 12:13:54 env -- scripts/common.sh@365 -- # decimal 1 00:14:58.786 12:13:54 env -- scripts/common.sh@353 -- # local d=1 00:14:58.786 12:13:54 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:58.786 12:13:54 env -- scripts/common.sh@355 -- # echo 1 00:14:58.786 12:13:54 env -- scripts/common.sh@365 -- # ver1[v]=1 00:14:58.786 12:13:54 env -- scripts/common.sh@366 -- # decimal 2 00:14:58.786 12:13:54 env -- scripts/common.sh@353 -- # local d=2 00:14:58.786 12:13:54 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:58.786 12:13:54 env -- scripts/common.sh@355 -- # echo 2 00:14:58.786 12:13:54 env -- scripts/common.sh@366 -- # ver2[v]=2 00:14:58.786 12:13:54 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:58.786 12:13:54 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:58.786 12:13:54 env -- scripts/common.sh@368 -- # return 0 00:14:58.786 12:13:54 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:58.786 12:13:54 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:58.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:58.786 --rc genhtml_branch_coverage=1 00:14:58.786 --rc genhtml_function_coverage=1 00:14:58.786 --rc genhtml_legend=1 00:14:58.786 --rc geninfo_all_blocks=1 00:14:58.786 --rc geninfo_unexecuted_blocks=1 00:14:58.786 00:14:58.786 ' 00:14:58.786 12:13:54 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:58.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:58.786 --rc genhtml_branch_coverage=1 00:14:58.786 --rc genhtml_function_coverage=1 00:14:58.786 --rc genhtml_legend=1 00:14:58.786 --rc geninfo_all_blocks=1 00:14:58.786 --rc geninfo_unexecuted_blocks=1 00:14:58.786 00:14:58.786 ' 00:14:58.786 12:13:54 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:58.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:58.786 --rc genhtml_branch_coverage=1 00:14:58.786 --rc genhtml_function_coverage=1 00:14:58.786 --rc genhtml_legend=1 00:14:58.786 --rc geninfo_all_blocks=1 00:14:58.786 --rc geninfo_unexecuted_blocks=1 00:14:58.786 00:14:58.786 ' 00:14:58.786 12:13:54 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:58.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:58.786 --rc genhtml_branch_coverage=1 00:14:58.786 --rc genhtml_function_coverage=1 00:14:58.786 --rc genhtml_legend=1 00:14:58.786 --rc geninfo_all_blocks=1 00:14:58.786 --rc geninfo_unexecuted_blocks=1 00:14:58.786 00:14:58.786 ' 00:14:58.786 12:13:54 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:14:58.786 12:13:54 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:58.786 12:13:54 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:58.786 12:13:54 env -- common/autotest_common.sh@10 -- # set +x 00:14:58.786 ************************************ 00:14:58.786 START TEST env_memory 00:14:58.786 ************************************ 00:14:58.786 12:13:54 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:14:58.786 00:14:58.786 00:14:58.786 CUnit - A unit testing framework for C - Version 2.1-3 00:14:58.786 http://cunit.sourceforge.net/ 00:14:58.786 00:14:58.786 00:14:58.786 Suite: memory 00:14:58.786 Test: alloc and free memory map ...[2024-11-25 12:13:54.537937] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:14:58.786 passed 00:14:58.786 Test: mem map translation ...[2024-11-25 12:13:54.561706] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:14:58.786 [2024-11-25 12:13:54.561886] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:14:58.786 [2024-11-25 12:13:54.561986] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:14:58.786 [2024-11-25 12:13:54.562033] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:14:58.786 passed 00:14:58.786 Test: mem map registration ...[2024-11-25 12:13:54.613109] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:14:58.786 [2024-11-25 12:13:54.613287] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:14:58.786 passed 00:14:59.047 Test: mem map adjacent registrations ...passed 00:14:59.047 00:14:59.047 Run Summary: Type Total Ran Passed Failed Inactive 00:14:59.047 suites 1 1 n/a 0 0 00:14:59.047 tests 4 4 4 0 0 00:14:59.047 asserts 152 152 152 0 n/a 00:14:59.047 00:14:59.047 Elapsed time = 0.168 seconds 00:14:59.047 00:14:59.047 real 0m0.185s 00:14:59.047 user 0m0.172s 00:14:59.047 sys 0m0.008s 00:14:59.047 12:13:54 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:59.047 12:13:54 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:14:59.047 ************************************ 00:14:59.047 END TEST env_memory 00:14:59.047 ************************************ 00:14:59.047 12:13:54 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:14:59.047 12:13:54 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:59.047 12:13:54 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:59.047 12:13:54 env -- common/autotest_common.sh@10 -- # set +x 00:14:59.047 ************************************ 00:14:59.047 START TEST env_vtophys 00:14:59.047 ************************************ 00:14:59.047 12:13:54 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:14:59.047 EAL: lib.eal log level changed from notice to debug 00:14:59.047 EAL: Detected lcore 0 as core 0 on socket 0 00:14:59.047 EAL: Detected lcore 1 as core 0 on socket 0 00:14:59.047 EAL: Detected lcore 2 as core 0 on socket 0 00:14:59.047 EAL: Detected lcore 3 as core 0 on socket 0 00:14:59.047 EAL: Detected lcore 4 as core 0 on socket 0 00:14:59.047 EAL: Detected lcore 5 as core 0 on socket 0 00:14:59.047 EAL: Detected lcore 6 as core 0 on socket 0 00:14:59.047 EAL: Detected lcore 7 as core 0 on socket 0 00:14:59.047 EAL: Detected lcore 8 as core 0 on socket 0 00:14:59.047 EAL: Detected lcore 9 as core 0 on socket 0 00:14:59.047 EAL: Maximum logical cores by configuration: 128 00:14:59.047 EAL: Detected CPU lcores: 10 00:14:59.047 EAL: Detected NUMA nodes: 1 00:14:59.047 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:14:59.047 EAL: Detected shared linkage of DPDK 00:14:59.047 EAL: No 
shared files mode enabled, IPC will be disabled 00:14:59.047 EAL: Selected IOVA mode 'PA' 00:14:59.047 EAL: Probing VFIO support... 00:14:59.047 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:14:59.047 EAL: VFIO modules not loaded, skipping VFIO support... 00:14:59.047 EAL: Ask a virtual area of 0x2e000 bytes 00:14:59.047 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:14:59.047 EAL: Setting up physically contiguous memory... 00:14:59.047 EAL: Setting maximum number of open files to 524288 00:14:59.047 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:14:59.047 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:14:59.047 EAL: Ask a virtual area of 0x61000 bytes 00:14:59.047 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:14:59.047 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:14:59.047 EAL: Ask a virtual area of 0x400000000 bytes 00:14:59.047 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:14:59.047 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:14:59.047 EAL: Ask a virtual area of 0x61000 bytes 00:14:59.047 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:14:59.047 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:14:59.047 EAL: Ask a virtual area of 0x400000000 bytes 00:14:59.047 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:14:59.047 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:14:59.047 EAL: Ask a virtual area of 0x61000 bytes 00:14:59.047 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:14:59.047 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:14:59.047 EAL: Ask a virtual area of 0x400000000 bytes 00:14:59.047 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:14:59.047 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:14:59.047 EAL: Ask a virtual area of 0x61000 bytes 00:14:59.047 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:14:59.048 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:14:59.048 EAL: Ask a virtual area of 0x400000000 bytes 00:14:59.048 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:14:59.048 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:14:59.048 EAL: Hugepages will be freed exactly as allocated. 00:14:59.048 EAL: No shared files mode enabled, IPC is disabled 00:14:59.048 EAL: No shared files mode enabled, IPC is disabled 00:14:59.048 EAL: TSC frequency is ~2600000 KHz 00:14:59.048 EAL: Main lcore 0 is ready (tid=7fee19cb4a00;cpuset=[0]) 00:14:59.048 EAL: Trying to obtain current memory policy. 00:14:59.048 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:59.048 EAL: Restoring previous memory policy: 0 00:14:59.048 EAL: request: mp_malloc_sync 00:14:59.048 EAL: No shared files mode enabled, IPC is disabled 00:14:59.048 EAL: Heap on socket 0 was expanded by 2MB 00:14:59.048 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:14:59.048 EAL: No PCI address specified using 'addr=' in: bus=pci 00:14:59.048 EAL: Mem event callback 'spdk:(nil)' registered 00:14:59.048 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:14:59.048 00:14:59.048 00:14:59.048 CUnit - A unit testing framework for C - Version 2.1-3 00:14:59.048 http://cunit.sourceforge.net/ 00:14:59.048 00:14:59.048 00:14:59.048 Suite: components_suite 00:14:59.048 Test: vtophys_malloc_test ...passed 00:14:59.048 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:14:59.048 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:59.048 EAL: Restoring previous memory policy: 4 00:14:59.048 EAL: Calling mem event callback 'spdk:(nil)' 00:14:59.048 EAL: request: mp_malloc_sync 00:14:59.048 EAL: No shared files mode enabled, IPC is disabled 00:14:59.048 EAL: Heap on socket 0 was expanded by 4MB 00:14:59.048 EAL: Calling mem event callback 'spdk:(nil)' 00:14:59.048 EAL: request: mp_malloc_sync 00:14:59.048 EAL: No shared files mode enabled, IPC is disabled 00:14:59.048 EAL: Heap on socket 0 was shrunk by 4MB 00:14:59.048 EAL: Trying to obtain current memory policy. 00:14:59.048 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:59.048 EAL: Restoring previous memory policy: 4 00:14:59.048 EAL: Calling mem event callback 'spdk:(nil)' 00:14:59.048 EAL: request: mp_malloc_sync 00:14:59.048 EAL: No shared files mode enabled, IPC is disabled 00:14:59.048 EAL: Heap on socket 0 was expanded by 6MB 00:14:59.048 EAL: Calling mem event callback 'spdk:(nil)' 00:14:59.048 EAL: request: mp_malloc_sync 00:14:59.048 EAL: No shared files mode enabled, IPC is disabled 00:14:59.048 EAL: Heap on socket 0 was shrunk by 6MB 00:14:59.048 EAL: Trying to obtain current memory policy. 00:14:59.048 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:59.048 EAL: Restoring previous memory policy: 4 00:14:59.048 EAL: Calling mem event callback 'spdk:(nil)' 00:14:59.048 EAL: request: mp_malloc_sync 00:14:59.048 EAL: No shared files mode enabled, IPC is disabled 00:14:59.048 EAL: Heap on socket 0 was expanded by 10MB 00:14:59.048 EAL: Calling mem event callback 'spdk:(nil)' 00:14:59.048 EAL: request: mp_malloc_sync 00:14:59.048 EAL: No shared files mode enabled, IPC is disabled 00:14:59.048 EAL: Heap on socket 0 was shrunk by 10MB 00:14:59.048 EAL: Trying to obtain current memory policy. 00:14:59.048 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:59.048 EAL: Restoring previous memory policy: 4 00:14:59.048 EAL: Calling mem event callback 'spdk:(nil)' 00:14:59.048 EAL: request: mp_malloc_sync 00:14:59.048 EAL: No shared files mode enabled, IPC is disabled 00:14:59.048 EAL: Heap on socket 0 was expanded by 18MB 00:14:59.048 EAL: Calling mem event callback 'spdk:(nil)' 00:14:59.048 EAL: request: mp_malloc_sync 00:14:59.048 EAL: No shared files mode enabled, IPC is disabled 00:14:59.048 EAL: Heap on socket 0 was shrunk by 18MB 00:14:59.048 EAL: Trying to obtain current memory policy. 00:14:59.048 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:59.048 EAL: Restoring previous memory policy: 4 00:14:59.048 EAL: Calling mem event callback 'spdk:(nil)' 00:14:59.048 EAL: request: mp_malloc_sync 00:14:59.048 EAL: No shared files mode enabled, IPC is disabled 00:14:59.048 EAL: Heap on socket 0 was expanded by 34MB 00:14:59.048 EAL: Calling mem event callback 'spdk:(nil)' 00:14:59.048 EAL: request: mp_malloc_sync 00:14:59.048 EAL: No shared files mode enabled, IPC is disabled 00:14:59.048 EAL: Heap on socket 0 was shrunk by 34MB 00:14:59.048 EAL: Trying to obtain current memory policy. 
00:14:59.048 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:59.048 EAL: Restoring previous memory policy: 4 00:14:59.048 EAL: Calling mem event callback 'spdk:(nil)' 00:14:59.048 EAL: request: mp_malloc_sync 00:14:59.048 EAL: No shared files mode enabled, IPC is disabled 00:14:59.048 EAL: Heap on socket 0 was expanded by 66MB 00:14:59.048 EAL: Calling mem event callback 'spdk:(nil)' 00:14:59.309 EAL: request: mp_malloc_sync 00:14:59.309 EAL: No shared files mode enabled, IPC is disabled 00:14:59.309 EAL: Heap on socket 0 was shrunk by 66MB 00:14:59.309 EAL: Trying to obtain current memory policy. 00:14:59.309 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:59.309 EAL: Restoring previous memory policy: 4 00:14:59.309 EAL: Calling mem event callback 'spdk:(nil)' 00:14:59.309 EAL: request: mp_malloc_sync 00:14:59.309 EAL: No shared files mode enabled, IPC is disabled 00:14:59.309 EAL: Heap on socket 0 was expanded by 130MB 00:14:59.309 EAL: Calling mem event callback 'spdk:(nil)' 00:14:59.309 EAL: request: mp_malloc_sync 00:14:59.309 EAL: No shared files mode enabled, IPC is disabled 00:14:59.309 EAL: Heap on socket 0 was shrunk by 130MB 00:14:59.309 EAL: Trying to obtain current memory policy. 00:14:59.309 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:59.309 EAL: Restoring previous memory policy: 4 00:14:59.309 EAL: Calling mem event callback 'spdk:(nil)' 00:14:59.309 EAL: request: mp_malloc_sync 00:14:59.309 EAL: No shared files mode enabled, IPC is disabled 00:14:59.309 EAL: Heap on socket 0 was expanded by 258MB 00:14:59.309 EAL: Calling mem event callback 'spdk:(nil)' 00:14:59.309 EAL: request: mp_malloc_sync 00:14:59.309 EAL: No shared files mode enabled, IPC is disabled 00:14:59.309 EAL: Heap on socket 0 was shrunk by 258MB 00:14:59.309 EAL: Trying to obtain current memory policy. 00:14:59.309 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:59.571 EAL: Restoring previous memory policy: 4 00:14:59.571 EAL: Calling mem event callback 'spdk:(nil)' 00:14:59.571 EAL: request: mp_malloc_sync 00:14:59.571 EAL: No shared files mode enabled, IPC is disabled 00:14:59.571 EAL: Heap on socket 0 was expanded by 514MB 00:14:59.571 EAL: Calling mem event callback 'spdk:(nil)' 00:14:59.571 EAL: request: mp_malloc_sync 00:14:59.571 EAL: No shared files mode enabled, IPC is disabled 00:14:59.571 EAL: Heap on socket 0 was shrunk by 514MB 00:14:59.571 EAL: Trying to obtain current memory policy. 
00:14:59.571 EAL: Setting policy MPOL_PREFERRED for socket 0 00:14:59.832 EAL: Restoring previous memory policy: 4 00:14:59.832 EAL: Calling mem event callback 'spdk:(nil)' 00:14:59.832 EAL: request: mp_malloc_sync 00:14:59.832 EAL: No shared files mode enabled, IPC is disabled 00:14:59.832 EAL: Heap on socket 0 was expanded by 1026MB 00:14:59.832 EAL: Calling mem event callback 'spdk:(nil)' 00:15:00.093 passed 00:15:00.093 00:15:00.093 Run Summary: Type Total Ran Passed Failed Inactive 00:15:00.093 suites 1 1 n/a 0 0 00:15:00.093 tests 2 2 2 0 0 00:15:00.093 asserts 5568 5568 5568 0 n/a 00:15:00.093 00:15:00.093 Elapsed time = 0.899 seconds 00:15:00.093 EAL: request: mp_malloc_sync 00:15:00.093 EAL: No shared files mode enabled, IPC is disabled 00:15:00.093 EAL: Heap on socket 0 was shrunk by 1026MB 00:15:00.093 EAL: Calling mem event callback 'spdk:(nil)' 00:15:00.093 EAL: request: mp_malloc_sync 00:15:00.093 EAL: No shared files mode enabled, IPC is disabled 00:15:00.093 EAL: Heap on socket 0 was shrunk by 2MB 00:15:00.093 EAL: No shared files mode enabled, IPC is disabled 00:15:00.093 EAL: No shared files mode enabled, IPC is disabled 00:15:00.093 EAL: No shared files mode enabled, IPC is disabled 00:15:00.093 00:15:00.093 real 0m1.094s 00:15:00.093 user 0m0.544s 00:15:00.093 sys 0m0.411s 00:15:00.093 12:13:55 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:00.093 ************************************ 00:15:00.093 END TEST env_vtophys 00:15:00.093 ************************************ 00:15:00.093 12:13:55 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:15:00.093 12:13:55 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:15:00.093 12:13:55 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:00.093 12:13:55 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:00.093 12:13:55 env -- common/autotest_common.sh@10 -- # set +x 00:15:00.093 ************************************ 00:15:00.093 START TEST env_pci 00:15:00.093 ************************************ 00:15:00.093 12:13:55 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:15:00.093 00:15:00.093 00:15:00.093 CUnit - A unit testing framework for C - Version 2.1-3 00:15:00.093 http://cunit.sourceforge.net/ 00:15:00.093 00:15:00.093 00:15:00.093 Suite: pci 00:15:00.093 Test: pci_hook ...[2024-11-25 12:13:55.876921] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57909 has claimed it 00:15:00.093 passed 00:15:00.093 00:15:00.093 Run Summary: Type Total Ran Passed Failed Inactive 00:15:00.093 suites 1 1 n/a 0 0 00:15:00.093 tests 1 1 1 0 0 00:15:00.093 asserts 25 25 25 0 n/a 00:15:00.093 00:15:00.093 Elapsed time = 0.002 seconds 00:15:00.093 EAL: Cannot find device (10000:00:01.0) 00:15:00.093 EAL: Failed to attach device on primary process 00:15:00.093 00:15:00.093 real 0m0.015s 00:15:00.093 user 0m0.008s 00:15:00.093 sys 0m0.007s 00:15:00.093 ************************************ 00:15:00.093 END TEST env_pci 00:15:00.093 ************************************ 00:15:00.093 12:13:55 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:00.093 12:13:55 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:15:00.093 12:13:55 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:15:00.093 12:13:55 env -- env/env.sh@15 -- # uname 00:15:00.093 12:13:55 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:15:00.093 12:13:55 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:15:00.093 12:13:55 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:15:00.093 12:13:55 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:00.093 12:13:55 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:00.093 12:13:55 env -- common/autotest_common.sh@10 -- # set +x 00:15:00.093 ************************************ 00:15:00.093 START TEST env_dpdk_post_init 00:15:00.093 ************************************ 00:15:00.093 12:13:55 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:15:00.093 EAL: Detected CPU lcores: 10 00:15:00.093 EAL: Detected NUMA nodes: 1 00:15:00.093 EAL: Detected shared linkage of DPDK 00:15:00.093 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:15:00.093 EAL: Selected IOVA mode 'PA' 00:15:00.356 TELEMETRY: No legacy callbacks, legacy socket not created 00:15:00.356 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:15:00.356 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:15:00.356 Starting DPDK initialization... 00:15:00.356 Starting SPDK post initialization... 00:15:00.356 SPDK NVMe probe 00:15:00.356 Attaching to 0000:00:10.0 00:15:00.356 Attaching to 0000:00:11.0 00:15:00.356 Attached to 0000:00:10.0 00:15:00.356 Attached to 0000:00:11.0 00:15:00.356 Cleaning up... 00:15:00.356 00:15:00.356 real 0m0.174s 00:15:00.356 user 0m0.045s 00:15:00.356 sys 0m0.028s 00:15:00.356 12:13:56 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:00.356 12:13:56 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:15:00.356 ************************************ 00:15:00.356 END TEST env_dpdk_post_init 00:15:00.356 ************************************ 00:15:00.356 12:13:56 env -- env/env.sh@26 -- # uname 00:15:00.356 12:13:56 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:15:00.356 12:13:56 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:15:00.356 12:13:56 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:00.356 12:13:56 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:00.356 12:13:56 env -- common/autotest_common.sh@10 -- # set +x 00:15:00.356 ************************************ 00:15:00.356 START TEST env_mem_callbacks 00:15:00.356 ************************************ 00:15:00.356 12:13:56 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:15:00.356 EAL: Detected CPU lcores: 10 00:15:00.356 EAL: Detected NUMA nodes: 1 00:15:00.356 EAL: Detected shared linkage of DPDK 00:15:00.356 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:15:00.356 EAL: Selected IOVA mode 'PA' 00:15:00.619 TELEMETRY: No legacy callbacks, legacy socket not created 00:15:00.619 00:15:00.619 00:15:00.619 CUnit - A unit testing framework for C - Version 2.1-3 00:15:00.619 http://cunit.sourceforge.net/ 00:15:00.619 00:15:00.619 00:15:00.619 Suite: memory 00:15:00.619 Test: test ... 
00:15:00.619 register 0x200000200000 2097152 00:15:00.619 malloc 3145728 00:15:00.619 register 0x200000400000 4194304 00:15:00.619 buf 0x200000500000 len 3145728 PASSED 00:15:00.619 malloc 64 00:15:00.619 buf 0x2000004fff40 len 64 PASSED 00:15:00.619 malloc 4194304 00:15:00.619 register 0x200000800000 6291456 00:15:00.619 buf 0x200000a00000 len 4194304 PASSED 00:15:00.619 free 0x200000500000 3145728 00:15:00.619 free 0x2000004fff40 64 00:15:00.619 unregister 0x200000400000 4194304 PASSED 00:15:00.619 free 0x200000a00000 4194304 00:15:00.619 unregister 0x200000800000 6291456 PASSED 00:15:00.619 malloc 8388608 00:15:00.619 register 0x200000400000 10485760 00:15:00.619 buf 0x200000600000 len 8388608 PASSED 00:15:00.619 free 0x200000600000 8388608 00:15:00.619 unregister 0x200000400000 10485760 PASSED 00:15:00.619 passed 00:15:00.619 00:15:00.619 Run Summary: Type Total Ran Passed Failed Inactive 00:15:00.619 suites 1 1 n/a 0 0 00:15:00.619 tests 1 1 1 0 0 00:15:00.619 asserts 15 15 15 0 n/a 00:15:00.619 00:15:00.619 Elapsed time = 0.007 seconds 00:15:00.619 00:15:00.619 real 0m0.133s 00:15:00.619 user 0m0.009s 00:15:00.619 sys 0m0.022s 00:15:00.619 12:13:56 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:00.619 ************************************ 00:15:00.619 END TEST env_mem_callbacks 00:15:00.619 ************************************ 00:15:00.619 12:13:56 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:15:00.619 ************************************ 00:15:00.619 END TEST env 00:15:00.619 ************************************ 00:15:00.619 00:15:00.619 real 0m1.974s 00:15:00.619 user 0m0.931s 00:15:00.619 sys 0m0.683s 00:15:00.619 12:13:56 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:00.619 12:13:56 env -- common/autotest_common.sh@10 -- # set +x 00:15:00.619 12:13:56 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:15:00.619 12:13:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:00.619 12:13:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:00.619 12:13:56 -- common/autotest_common.sh@10 -- # set +x 00:15:00.619 ************************************ 00:15:00.619 START TEST rpc 00:15:00.619 ************************************ 00:15:00.619 12:13:56 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:15:00.619 * Looking for test storage... 
00:15:00.619 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:15:00.619 12:13:56 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:00.619 12:13:56 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:15:00.619 12:13:56 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:00.619 12:13:56 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:00.619 12:13:56 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:00.619 12:13:56 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:00.619 12:13:56 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:00.619 12:13:56 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:15:00.619 12:13:56 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:15:00.619 12:13:56 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:15:00.619 12:13:56 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:15:00.619 12:13:56 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:15:00.619 12:13:56 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:15:00.619 12:13:56 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:15:00.619 12:13:56 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:00.619 12:13:56 rpc -- scripts/common.sh@344 -- # case "$op" in 00:15:00.619 12:13:56 rpc -- scripts/common.sh@345 -- # : 1 00:15:00.619 12:13:56 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:00.619 12:13:56 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:00.619 12:13:56 rpc -- scripts/common.sh@365 -- # decimal 1 00:15:00.930 12:13:56 rpc -- scripts/common.sh@353 -- # local d=1 00:15:00.930 12:13:56 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:00.930 12:13:56 rpc -- scripts/common.sh@355 -- # echo 1 00:15:00.930 12:13:56 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:15:00.930 12:13:56 rpc -- scripts/common.sh@366 -- # decimal 2 00:15:00.930 12:13:56 rpc -- scripts/common.sh@353 -- # local d=2 00:15:00.930 12:13:56 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:00.930 12:13:56 rpc -- scripts/common.sh@355 -- # echo 2 00:15:00.930 12:13:56 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:15:00.930 12:13:56 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:00.930 12:13:56 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:00.930 12:13:56 rpc -- scripts/common.sh@368 -- # return 0 00:15:00.930 12:13:56 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:00.930 12:13:56 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:00.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:00.930 --rc genhtml_branch_coverage=1 00:15:00.930 --rc genhtml_function_coverage=1 00:15:00.930 --rc genhtml_legend=1 00:15:00.930 --rc geninfo_all_blocks=1 00:15:00.930 --rc geninfo_unexecuted_blocks=1 00:15:00.930 00:15:00.930 ' 00:15:00.930 12:13:56 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:00.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:00.930 --rc genhtml_branch_coverage=1 00:15:00.930 --rc genhtml_function_coverage=1 00:15:00.930 --rc genhtml_legend=1 00:15:00.930 --rc geninfo_all_blocks=1 00:15:00.930 --rc geninfo_unexecuted_blocks=1 00:15:00.930 00:15:00.930 ' 00:15:00.930 12:13:56 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:00.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:00.930 --rc genhtml_branch_coverage=1 00:15:00.930 --rc genhtml_function_coverage=1 00:15:00.930 --rc 
genhtml_legend=1 00:15:00.930 --rc geninfo_all_blocks=1 00:15:00.930 --rc geninfo_unexecuted_blocks=1 00:15:00.930 00:15:00.930 ' 00:15:00.930 12:13:56 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:00.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:00.930 --rc genhtml_branch_coverage=1 00:15:00.930 --rc genhtml_function_coverage=1 00:15:00.930 --rc genhtml_legend=1 00:15:00.930 --rc geninfo_all_blocks=1 00:15:00.930 --rc geninfo_unexecuted_blocks=1 00:15:00.930 00:15:00.930 ' 00:15:00.930 12:13:56 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58032 00:15:00.930 12:13:56 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:15:00.930 12:13:56 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58032 00:15:00.930 12:13:56 rpc -- common/autotest_common.sh@835 -- # '[' -z 58032 ']' 00:15:00.930 12:13:56 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:15:00.930 12:13:56 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:00.930 12:13:56 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:00.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:00.930 12:13:56 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:00.930 12:13:56 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:00.930 12:13:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.930 [2024-11-25 12:13:56.545374] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:15:00.930 [2024-11-25 12:13:56.545633] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58032 ] 00:15:00.930 [2024-11-25 12:13:56.691533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.930 [2024-11-25 12:13:56.746690] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:15:00.930 [2024-11-25 12:13:56.746971] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58032' to capture a snapshot of events at runtime. 00:15:00.930 [2024-11-25 12:13:56.747068] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:00.930 [2024-11-25 12:13:56.747115] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:00.930 [2024-11-25 12:13:56.747184] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58032 for offline analysis/debug. 
00:15:00.930 [2024-11-25 12:13:56.747898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:01.895 12:13:57 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:01.895 12:13:57 rpc -- common/autotest_common.sh@868 -- # return 0 00:15:01.895 12:13:57 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:15:01.895 12:13:57 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:15:01.895 12:13:57 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:15:01.895 12:13:57 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:15:01.895 12:13:57 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:01.895 12:13:57 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:01.895 12:13:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:15:01.895 ************************************ 00:15:01.895 START TEST rpc_integrity 00:15:01.895 ************************************ 00:15:01.895 12:13:57 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:15:01.895 12:13:57 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:01.895 12:13:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.895 12:13:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:01.895 12:13:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.895 12:13:57 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:15:01.895 12:13:57 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:15:01.895 12:13:57 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:15:01.895 12:13:57 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:15:01.895 12:13:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.895 12:13:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:01.895 12:13:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.895 12:13:57 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:15:01.895 12:13:57 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:15:01.895 12:13:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.895 12:13:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:01.895 12:13:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.895 12:13:57 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:15:01.895 { 00:15:01.895 "aliases": [ 00:15:01.895 "b232a994-5536-4779-bee4-d1e5e4a2ccd8" 00:15:01.895 ], 00:15:01.895 "assigned_rate_limits": { 00:15:01.895 "r_mbytes_per_sec": 0, 00:15:01.895 "rw_ios_per_sec": 0, 00:15:01.895 "rw_mbytes_per_sec": 0, 00:15:01.895 "w_mbytes_per_sec": 0 00:15:01.895 }, 00:15:01.895 "block_size": 512, 00:15:01.895 "claimed": false, 00:15:01.895 "driver_specific": {}, 00:15:01.895 "memory_domains": [ 00:15:01.895 { 00:15:01.895 "dma_device_id": "system", 00:15:01.895 "dma_device_type": 1 00:15:01.895 }, 00:15:01.895 { 00:15:01.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.895 "dma_device_type": 2 00:15:01.895 } 00:15:01.895 ], 00:15:01.895 "name": "Malloc0", 
00:15:01.895 "num_blocks": 16384, 00:15:01.895 "product_name": "Malloc disk", 00:15:01.895 "supported_io_types": { 00:15:01.895 "abort": true, 00:15:01.895 "compare": false, 00:15:01.895 "compare_and_write": false, 00:15:01.895 "copy": true, 00:15:01.895 "flush": true, 00:15:01.895 "get_zone_info": false, 00:15:01.895 "nvme_admin": false, 00:15:01.895 "nvme_io": false, 00:15:01.895 "nvme_io_md": false, 00:15:01.895 "nvme_iov_md": false, 00:15:01.895 "read": true, 00:15:01.895 "reset": true, 00:15:01.895 "seek_data": false, 00:15:01.895 "seek_hole": false, 00:15:01.895 "unmap": true, 00:15:01.895 "write": true, 00:15:01.895 "write_zeroes": true, 00:15:01.895 "zcopy": true, 00:15:01.895 "zone_append": false, 00:15:01.895 "zone_management": false 00:15:01.895 }, 00:15:01.895 "uuid": "b232a994-5536-4779-bee4-d1e5e4a2ccd8", 00:15:01.895 "zoned": false 00:15:01.895 } 00:15:01.895 ]' 00:15:01.895 12:13:57 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:15:01.895 12:13:57 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:15:01.895 12:13:57 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:15:01.895 12:13:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.895 12:13:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:01.895 [2024-11-25 12:13:57.545018] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:15:01.895 [2024-11-25 12:13:57.545236] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:01.895 [2024-11-25 12:13:57.545257] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x67dba0 00:15:01.895 [2024-11-25 12:13:57.545264] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:01.895 [2024-11-25 12:13:57.546785] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:01.895 [2024-11-25 12:13:57.546814] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:15:01.895 Passthru0 00:15:01.895 12:13:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.895 12:13:57 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:15:01.895 12:13:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.895 12:13:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:01.895 12:13:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.895 12:13:57 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:15:01.895 { 00:15:01.895 "aliases": [ 00:15:01.895 "b232a994-5536-4779-bee4-d1e5e4a2ccd8" 00:15:01.895 ], 00:15:01.895 "assigned_rate_limits": { 00:15:01.895 "r_mbytes_per_sec": 0, 00:15:01.895 "rw_ios_per_sec": 0, 00:15:01.895 "rw_mbytes_per_sec": 0, 00:15:01.895 "w_mbytes_per_sec": 0 00:15:01.895 }, 00:15:01.895 "block_size": 512, 00:15:01.895 "claim_type": "exclusive_write", 00:15:01.895 "claimed": true, 00:15:01.895 "driver_specific": {}, 00:15:01.895 "memory_domains": [ 00:15:01.895 { 00:15:01.895 "dma_device_id": "system", 00:15:01.895 "dma_device_type": 1 00:15:01.895 }, 00:15:01.895 { 00:15:01.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.895 "dma_device_type": 2 00:15:01.895 } 00:15:01.895 ], 00:15:01.895 "name": "Malloc0", 00:15:01.895 "num_blocks": 16384, 00:15:01.895 "product_name": "Malloc disk", 00:15:01.895 "supported_io_types": { 00:15:01.895 "abort": true, 00:15:01.895 "compare": false, 00:15:01.895 
"compare_and_write": false, 00:15:01.895 "copy": true, 00:15:01.895 "flush": true, 00:15:01.895 "get_zone_info": false, 00:15:01.895 "nvme_admin": false, 00:15:01.895 "nvme_io": false, 00:15:01.895 "nvme_io_md": false, 00:15:01.895 "nvme_iov_md": false, 00:15:01.895 "read": true, 00:15:01.895 "reset": true, 00:15:01.895 "seek_data": false, 00:15:01.895 "seek_hole": false, 00:15:01.895 "unmap": true, 00:15:01.895 "write": true, 00:15:01.895 "write_zeroes": true, 00:15:01.895 "zcopy": true, 00:15:01.895 "zone_append": false, 00:15:01.895 "zone_management": false 00:15:01.895 }, 00:15:01.895 "uuid": "b232a994-5536-4779-bee4-d1e5e4a2ccd8", 00:15:01.895 "zoned": false 00:15:01.895 }, 00:15:01.895 { 00:15:01.895 "aliases": [ 00:15:01.895 "e4ef7608-2847-5535-8c48-4e9f0d34db29" 00:15:01.895 ], 00:15:01.895 "assigned_rate_limits": { 00:15:01.895 "r_mbytes_per_sec": 0, 00:15:01.895 "rw_ios_per_sec": 0, 00:15:01.895 "rw_mbytes_per_sec": 0, 00:15:01.895 "w_mbytes_per_sec": 0 00:15:01.895 }, 00:15:01.895 "block_size": 512, 00:15:01.895 "claimed": false, 00:15:01.895 "driver_specific": { 00:15:01.895 "passthru": { 00:15:01.895 "base_bdev_name": "Malloc0", 00:15:01.895 "name": "Passthru0" 00:15:01.895 } 00:15:01.895 }, 00:15:01.895 "memory_domains": [ 00:15:01.895 { 00:15:01.895 "dma_device_id": "system", 00:15:01.895 "dma_device_type": 1 00:15:01.895 }, 00:15:01.895 { 00:15:01.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.895 "dma_device_type": 2 00:15:01.895 } 00:15:01.895 ], 00:15:01.895 "name": "Passthru0", 00:15:01.895 "num_blocks": 16384, 00:15:01.895 "product_name": "passthru", 00:15:01.895 "supported_io_types": { 00:15:01.895 "abort": true, 00:15:01.895 "compare": false, 00:15:01.895 "compare_and_write": false, 00:15:01.895 "copy": true, 00:15:01.895 "flush": true, 00:15:01.895 "get_zone_info": false, 00:15:01.895 "nvme_admin": false, 00:15:01.895 "nvme_io": false, 00:15:01.895 "nvme_io_md": false, 00:15:01.895 "nvme_iov_md": false, 00:15:01.895 "read": true, 00:15:01.895 "reset": true, 00:15:01.895 "seek_data": false, 00:15:01.895 "seek_hole": false, 00:15:01.895 "unmap": true, 00:15:01.895 "write": true, 00:15:01.895 "write_zeroes": true, 00:15:01.895 "zcopy": true, 00:15:01.895 "zone_append": false, 00:15:01.895 "zone_management": false 00:15:01.895 }, 00:15:01.895 "uuid": "e4ef7608-2847-5535-8c48-4e9f0d34db29", 00:15:01.895 "zoned": false 00:15:01.895 } 00:15:01.895 ]' 00:15:01.895 12:13:57 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:15:01.895 12:13:57 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:15:01.895 12:13:57 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:15:01.895 12:13:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.895 12:13:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:01.895 12:13:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.896 12:13:57 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:15:01.896 12:13:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.896 12:13:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:01.896 12:13:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.896 12:13:57 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:15:01.896 12:13:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.896 12:13:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- 
# set +x 00:15:01.896 12:13:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.896 12:13:57 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:15:01.896 12:13:57 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:15:01.896 ************************************ 00:15:01.896 END TEST rpc_integrity 00:15:01.896 ************************************ 00:15:01.896 12:13:57 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:15:01.896 00:15:01.896 real 0m0.220s 00:15:01.896 user 0m0.121s 00:15:01.896 sys 0m0.033s 00:15:01.896 12:13:57 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:01.896 12:13:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:01.896 12:13:57 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:15:01.896 12:13:57 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:01.896 12:13:57 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:01.896 12:13:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:15:01.896 ************************************ 00:15:01.896 START TEST rpc_plugins 00:15:01.896 ************************************ 00:15:01.896 12:13:57 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:15:01.896 12:13:57 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:15:01.896 12:13:57 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.896 12:13:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:15:01.896 12:13:57 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.896 12:13:57 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:15:01.896 12:13:57 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:15:01.896 12:13:57 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.896 12:13:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:15:01.896 12:13:57 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.896 12:13:57 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:15:01.896 { 00:15:01.896 "aliases": [ 00:15:01.896 "c4e66d48-1274-4d69-b1d0-ae4a72cd369b" 00:15:01.896 ], 00:15:01.896 "assigned_rate_limits": { 00:15:01.896 "r_mbytes_per_sec": 0, 00:15:01.896 "rw_ios_per_sec": 0, 00:15:01.896 "rw_mbytes_per_sec": 0, 00:15:01.896 "w_mbytes_per_sec": 0 00:15:01.896 }, 00:15:01.896 "block_size": 4096, 00:15:01.896 "claimed": false, 00:15:01.896 "driver_specific": {}, 00:15:01.896 "memory_domains": [ 00:15:01.896 { 00:15:01.896 "dma_device_id": "system", 00:15:01.896 "dma_device_type": 1 00:15:01.896 }, 00:15:01.896 { 00:15:01.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.896 "dma_device_type": 2 00:15:01.896 } 00:15:01.896 ], 00:15:01.896 "name": "Malloc1", 00:15:01.896 "num_blocks": 256, 00:15:01.896 "product_name": "Malloc disk", 00:15:01.896 "supported_io_types": { 00:15:01.896 "abort": true, 00:15:01.896 "compare": false, 00:15:01.896 "compare_and_write": false, 00:15:01.896 "copy": true, 00:15:01.896 "flush": true, 00:15:01.896 "get_zone_info": false, 00:15:01.896 "nvme_admin": false, 00:15:01.896 "nvme_io": false, 00:15:01.896 "nvme_io_md": false, 00:15:01.896 "nvme_iov_md": false, 00:15:01.896 "read": true, 00:15:01.896 "reset": true, 00:15:01.896 "seek_data": false, 00:15:01.896 "seek_hole": false, 00:15:01.896 "unmap": true, 00:15:01.896 "write": true, 00:15:01.896 "write_zeroes": true, 00:15:01.896 "zcopy": true, 00:15:01.896 "zone_append": false, 
00:15:01.896 "zone_management": false 00:15:01.896 }, 00:15:01.896 "uuid": "c4e66d48-1274-4d69-b1d0-ae4a72cd369b", 00:15:01.896 "zoned": false 00:15:01.896 } 00:15:01.896 ]' 00:15:01.896 12:13:57 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:15:01.896 12:13:57 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:15:01.896 12:13:57 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:15:01.896 12:13:57 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.896 12:13:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:15:02.154 12:13:57 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.154 12:13:57 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:15:02.154 12:13:57 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.154 12:13:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:15:02.154 12:13:57 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.154 12:13:57 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:15:02.154 12:13:57 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:15:02.154 ************************************ 00:15:02.154 END TEST rpc_plugins 00:15:02.154 ************************************ 00:15:02.154 12:13:57 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:15:02.154 00:15:02.154 real 0m0.112s 00:15:02.154 user 0m0.063s 00:15:02.154 sys 0m0.015s 00:15:02.154 12:13:57 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:02.154 12:13:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:15:02.154 12:13:57 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:15:02.154 12:13:57 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:02.154 12:13:57 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:02.154 12:13:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:15:02.154 ************************************ 00:15:02.154 START TEST rpc_trace_cmd_test 00:15:02.154 ************************************ 00:15:02.154 12:13:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:15:02.154 12:13:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:15:02.154 12:13:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:15:02.154 12:13:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.154 12:13:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.154 12:13:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.154 12:13:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:15:02.154 "bdev": { 00:15:02.154 "mask": "0x8", 00:15:02.154 "tpoint_mask": "0xffffffffffffffff" 00:15:02.154 }, 00:15:02.154 "bdev_nvme": { 00:15:02.154 "mask": "0x4000", 00:15:02.154 "tpoint_mask": "0x0" 00:15:02.154 }, 00:15:02.154 "bdev_raid": { 00:15:02.154 "mask": "0x20000", 00:15:02.154 "tpoint_mask": "0x0" 00:15:02.154 }, 00:15:02.154 "blob": { 00:15:02.154 "mask": "0x10000", 00:15:02.154 "tpoint_mask": "0x0" 00:15:02.154 }, 00:15:02.154 "blobfs": { 00:15:02.154 "mask": "0x80", 00:15:02.154 "tpoint_mask": "0x0" 00:15:02.154 }, 00:15:02.154 "dsa": { 00:15:02.154 "mask": "0x200", 00:15:02.154 "tpoint_mask": "0x0" 00:15:02.154 }, 00:15:02.154 "ftl": { 00:15:02.154 "mask": "0x40", 00:15:02.154 "tpoint_mask": "0x0" 00:15:02.154 }, 00:15:02.154 "iaa": { 00:15:02.154 "mask": "0x1000", 
00:15:02.154 "tpoint_mask": "0x0" 00:15:02.154 }, 00:15:02.154 "iscsi_conn": { 00:15:02.154 "mask": "0x2", 00:15:02.154 "tpoint_mask": "0x0" 00:15:02.154 }, 00:15:02.154 "nvme_pcie": { 00:15:02.154 "mask": "0x800", 00:15:02.154 "tpoint_mask": "0x0" 00:15:02.154 }, 00:15:02.154 "nvme_tcp": { 00:15:02.154 "mask": "0x2000", 00:15:02.154 "tpoint_mask": "0x0" 00:15:02.154 }, 00:15:02.154 "nvmf_rdma": { 00:15:02.154 "mask": "0x10", 00:15:02.154 "tpoint_mask": "0x0" 00:15:02.154 }, 00:15:02.154 "nvmf_tcp": { 00:15:02.154 "mask": "0x20", 00:15:02.154 "tpoint_mask": "0x0" 00:15:02.154 }, 00:15:02.154 "scheduler": { 00:15:02.154 "mask": "0x40000", 00:15:02.154 "tpoint_mask": "0x0" 00:15:02.154 }, 00:15:02.154 "scsi": { 00:15:02.154 "mask": "0x4", 00:15:02.154 "tpoint_mask": "0x0" 00:15:02.154 }, 00:15:02.154 "sock": { 00:15:02.154 "mask": "0x8000", 00:15:02.154 "tpoint_mask": "0x0" 00:15:02.154 }, 00:15:02.154 "thread": { 00:15:02.154 "mask": "0x400", 00:15:02.154 "tpoint_mask": "0x0" 00:15:02.154 }, 00:15:02.154 "tpoint_group_mask": "0x8", 00:15:02.154 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58032" 00:15:02.154 }' 00:15:02.154 12:13:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:15:02.154 12:13:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:15:02.154 12:13:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:15:02.154 12:13:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:15:02.154 12:13:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:15:02.154 12:13:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:15:02.154 12:13:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:15:02.154 12:13:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:15:02.154 12:13:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:15:02.154 ************************************ 00:15:02.154 END TEST rpc_trace_cmd_test 00:15:02.154 ************************************ 00:15:02.154 12:13:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:15:02.154 00:15:02.154 real 0m0.173s 00:15:02.154 user 0m0.145s 00:15:02.154 sys 0m0.020s 00:15:02.154 12:13:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:02.154 12:13:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.413 12:13:58 rpc -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:15:02.413 12:13:58 rpc -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:15:02.413 12:13:58 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:02.413 12:13:58 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:02.413 12:13:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:15:02.413 ************************************ 00:15:02.413 START TEST go_rpc 00:15:02.413 ************************************ 00:15:02.413 12:13:58 rpc.go_rpc -- common/autotest_common.sh@1129 -- # go_rpc 00:15:02.413 12:13:58 rpc.go_rpc -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:15:02.413 12:13:58 rpc.go_rpc -- rpc/rpc.sh@51 -- # bdevs='[]' 00:15:02.413 12:13:58 rpc.go_rpc -- rpc/rpc.sh@52 -- # jq length 00:15:02.413 12:13:58 rpc.go_rpc -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:15:02.413 12:13:58 rpc.go_rpc -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:15:02.413 12:13:58 rpc.go_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.413 12:13:58 rpc.go_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:15:02.413 12:13:58 rpc.go_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.413 12:13:58 rpc.go_rpc -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:15:02.413 12:13:58 rpc.go_rpc -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:15:02.413 12:13:58 rpc.go_rpc -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["5a59e0e7-5b2e-46db-bbc7-1a98ab9c0aef"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"copy":true,"flush":true,"get_zone_info":false,"nvme_admin":false,"nvme_io":false,"nvme_io_md":false,"nvme_iov_md":false,"read":true,"reset":true,"seek_data":false,"seek_hole":false,"unmap":true,"write":true,"write_zeroes":true,"zcopy":true,"zone_append":false,"zone_management":false},"uuid":"5a59e0e7-5b2e-46db-bbc7-1a98ab9c0aef","zoned":false}]' 00:15:02.413 12:13:58 rpc.go_rpc -- rpc/rpc.sh@57 -- # jq length 00:15:02.413 12:13:58 rpc.go_rpc -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:15:02.413 12:13:58 rpc.go_rpc -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:15:02.413 12:13:58 rpc.go_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.413 12:13:58 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:02.413 12:13:58 rpc.go_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.413 12:13:58 rpc.go_rpc -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:15:02.413 12:13:58 rpc.go_rpc -- rpc/rpc.sh@60 -- # bdevs='[]' 00:15:02.413 12:13:58 rpc.go_rpc -- rpc/rpc.sh@61 -- # jq length 00:15:02.413 12:13:58 rpc.go_rpc -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:15:02.413 00:15:02.413 real 0m0.150s 00:15:02.413 user 0m0.082s 00:15:02.413 sys 0m0.033s 00:15:02.413 12:13:58 rpc.go_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:02.413 ************************************ 00:15:02.413 12:13:58 rpc.go_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:02.413 END TEST go_rpc 00:15:02.413 ************************************ 00:15:02.413 12:13:58 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:15:02.413 12:13:58 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:15:02.413 12:13:58 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:02.413 12:13:58 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:02.413 12:13:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:15:02.413 ************************************ 00:15:02.413 START TEST rpc_daemon_integrity 00:15:02.413 ************************************ 00:15:02.413 12:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:15:02.413 12:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:02.413 12:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.413 12:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:02.414 12:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.414 12:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:15:02.414 12:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:15:02.414 
12:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:15:02.672 12:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:15:02.672 12:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.672 12:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:02.672 12:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.672 12:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:15:02.672 12:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:15:02.672 12:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.672 12:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:02.672 12:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.672 12:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:15:02.672 { 00:15:02.672 "aliases": [ 00:15:02.672 "88598e91-f1c9-4800-95d0-577f8c6a59a4" 00:15:02.672 ], 00:15:02.672 "assigned_rate_limits": { 00:15:02.672 "r_mbytes_per_sec": 0, 00:15:02.672 "rw_ios_per_sec": 0, 00:15:02.672 "rw_mbytes_per_sec": 0, 00:15:02.672 "w_mbytes_per_sec": 0 00:15:02.672 }, 00:15:02.672 "block_size": 512, 00:15:02.672 "claimed": false, 00:15:02.672 "driver_specific": {}, 00:15:02.672 "memory_domains": [ 00:15:02.672 { 00:15:02.672 "dma_device_id": "system", 00:15:02.672 "dma_device_type": 1 00:15:02.672 }, 00:15:02.672 { 00:15:02.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.672 "dma_device_type": 2 00:15:02.672 } 00:15:02.672 ], 00:15:02.672 "name": "Malloc3", 00:15:02.672 "num_blocks": 16384, 00:15:02.672 "product_name": "Malloc disk", 00:15:02.672 "supported_io_types": { 00:15:02.672 "abort": true, 00:15:02.672 "compare": false, 00:15:02.672 "compare_and_write": false, 00:15:02.672 "copy": true, 00:15:02.672 "flush": true, 00:15:02.672 "get_zone_info": false, 00:15:02.672 "nvme_admin": false, 00:15:02.672 "nvme_io": false, 00:15:02.672 "nvme_io_md": false, 00:15:02.672 "nvme_iov_md": false, 00:15:02.672 "read": true, 00:15:02.672 "reset": true, 00:15:02.672 "seek_data": false, 00:15:02.672 "seek_hole": false, 00:15:02.672 "unmap": true, 00:15:02.672 "write": true, 00:15:02.672 "write_zeroes": true, 00:15:02.672 "zcopy": true, 00:15:02.672 "zone_append": false, 00:15:02.672 "zone_management": false 00:15:02.672 }, 00:15:02.672 "uuid": "88598e91-f1c9-4800-95d0-577f8c6a59a4", 00:15:02.672 "zoned": false 00:15:02.672 } 00:15:02.672 ]' 00:15:02.672 12:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:15:02.672 12:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:15:02.672 12:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:15:02.672 12:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.672 12:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:02.672 [2024-11-25 12:13:58.353303] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:15:02.672 [2024-11-25 12:13:58.353360] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.672 [2024-11-25 12:13:58.353375] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x53c420 00:15:02.672 [2024-11-25 12:13:58.353382] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:15:02.672 [2024-11-25 12:13:58.354806] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.672 [2024-11-25 12:13:58.354836] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:15:02.672 Passthru0 00:15:02.672 12:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.672 12:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:15:02.672 12:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.672 12:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:02.672 12:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.672 12:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:15:02.672 { 00:15:02.672 "aliases": [ 00:15:02.672 "88598e91-f1c9-4800-95d0-577f8c6a59a4" 00:15:02.672 ], 00:15:02.672 "assigned_rate_limits": { 00:15:02.672 "r_mbytes_per_sec": 0, 00:15:02.672 "rw_ios_per_sec": 0, 00:15:02.672 "rw_mbytes_per_sec": 0, 00:15:02.672 "w_mbytes_per_sec": 0 00:15:02.672 }, 00:15:02.672 "block_size": 512, 00:15:02.672 "claim_type": "exclusive_write", 00:15:02.672 "claimed": true, 00:15:02.672 "driver_specific": {}, 00:15:02.672 "memory_domains": [ 00:15:02.672 { 00:15:02.672 "dma_device_id": "system", 00:15:02.672 "dma_device_type": 1 00:15:02.672 }, 00:15:02.672 { 00:15:02.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.672 "dma_device_type": 2 00:15:02.672 } 00:15:02.672 ], 00:15:02.672 "name": "Malloc3", 00:15:02.672 "num_blocks": 16384, 00:15:02.672 "product_name": "Malloc disk", 00:15:02.672 "supported_io_types": { 00:15:02.672 "abort": true, 00:15:02.672 "compare": false, 00:15:02.672 "compare_and_write": false, 00:15:02.672 "copy": true, 00:15:02.672 "flush": true, 00:15:02.672 "get_zone_info": false, 00:15:02.672 "nvme_admin": false, 00:15:02.672 "nvme_io": false, 00:15:02.672 "nvme_io_md": false, 00:15:02.672 "nvme_iov_md": false, 00:15:02.672 "read": true, 00:15:02.672 "reset": true, 00:15:02.672 "seek_data": false, 00:15:02.672 "seek_hole": false, 00:15:02.672 "unmap": true, 00:15:02.672 "write": true, 00:15:02.672 "write_zeroes": true, 00:15:02.672 "zcopy": true, 00:15:02.672 "zone_append": false, 00:15:02.672 "zone_management": false 00:15:02.672 }, 00:15:02.672 "uuid": "88598e91-f1c9-4800-95d0-577f8c6a59a4", 00:15:02.672 "zoned": false 00:15:02.672 }, 00:15:02.672 { 00:15:02.672 "aliases": [ 00:15:02.672 "248341e6-827e-5c8b-bfbb-700714d1028f" 00:15:02.672 ], 00:15:02.672 "assigned_rate_limits": { 00:15:02.672 "r_mbytes_per_sec": 0, 00:15:02.672 "rw_ios_per_sec": 0, 00:15:02.672 "rw_mbytes_per_sec": 0, 00:15:02.672 "w_mbytes_per_sec": 0 00:15:02.672 }, 00:15:02.672 "block_size": 512, 00:15:02.672 "claimed": false, 00:15:02.672 "driver_specific": { 00:15:02.672 "passthru": { 00:15:02.672 "base_bdev_name": "Malloc3", 00:15:02.672 "name": "Passthru0" 00:15:02.672 } 00:15:02.672 }, 00:15:02.672 "memory_domains": [ 00:15:02.672 { 00:15:02.672 "dma_device_id": "system", 00:15:02.672 "dma_device_type": 1 00:15:02.672 }, 00:15:02.672 { 00:15:02.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.672 "dma_device_type": 2 00:15:02.672 } 00:15:02.672 ], 00:15:02.672 "name": "Passthru0", 00:15:02.672 "num_blocks": 16384, 00:15:02.672 "product_name": "passthru", 00:15:02.672 "supported_io_types": { 00:15:02.672 "abort": true, 00:15:02.672 "compare": false, 00:15:02.672 "compare_and_write": false, 00:15:02.672 "copy": true, 
00:15:02.672 "flush": true, 00:15:02.672 "get_zone_info": false, 00:15:02.672 "nvme_admin": false, 00:15:02.672 "nvme_io": false, 00:15:02.672 "nvme_io_md": false, 00:15:02.672 "nvme_iov_md": false, 00:15:02.672 "read": true, 00:15:02.672 "reset": true, 00:15:02.672 "seek_data": false, 00:15:02.672 "seek_hole": false, 00:15:02.672 "unmap": true, 00:15:02.672 "write": true, 00:15:02.672 "write_zeroes": true, 00:15:02.672 "zcopy": true, 00:15:02.672 "zone_append": false, 00:15:02.672 "zone_management": false 00:15:02.672 }, 00:15:02.672 "uuid": "248341e6-827e-5c8b-bfbb-700714d1028f", 00:15:02.672 "zoned": false 00:15:02.672 } 00:15:02.672 ]' 00:15:02.672 12:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:15:02.673 12:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:15:02.673 12:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:15:02.673 12:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.673 12:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:02.673 12:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.673 12:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:15:02.673 12:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.673 12:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:02.673 12:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.673 12:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:15:02.673 12:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.673 12:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:02.673 12:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.673 12:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:15:02.673 12:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:15:02.673 12:13:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:15:02.673 00:15:02.673 real 0m0.235s 00:15:02.673 user 0m0.137s 00:15:02.673 sys 0m0.028s 00:15:02.673 12:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:02.673 12:13:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:15:02.673 ************************************ 00:15:02.673 END TEST rpc_daemon_integrity 00:15:02.673 ************************************ 00:15:02.673 12:13:58 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:15:02.673 12:13:58 rpc -- rpc/rpc.sh@84 -- # killprocess 58032 00:15:02.673 12:13:58 rpc -- common/autotest_common.sh@954 -- # '[' -z 58032 ']' 00:15:02.673 12:13:58 rpc -- common/autotest_common.sh@958 -- # kill -0 58032 00:15:02.673 12:13:58 rpc -- common/autotest_common.sh@959 -- # uname 00:15:02.673 12:13:58 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:02.673 12:13:58 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58032 00:15:02.673 killing process with pid 58032 00:15:02.673 12:13:58 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:02.673 12:13:58 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:02.673 12:13:58 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58032' 00:15:02.673 12:13:58 rpc -- 
common/autotest_common.sh@973 -- # kill 58032 00:15:02.673 12:13:58 rpc -- common/autotest_common.sh@978 -- # wait 58032 00:15:02.930 00:15:02.930 real 0m2.444s 00:15:02.930 user 0m3.027s 00:15:02.930 sys 0m0.599s 00:15:02.930 12:13:58 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:02.930 ************************************ 00:15:02.930 END TEST rpc 00:15:02.930 ************************************ 00:15:02.930 12:13:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:15:03.214 12:13:58 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:15:03.214 12:13:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:03.214 12:13:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:03.214 12:13:58 -- common/autotest_common.sh@10 -- # set +x 00:15:03.214 ************************************ 00:15:03.214 START TEST skip_rpc 00:15:03.214 ************************************ 00:15:03.214 12:13:58 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:15:03.214 * Looking for test storage... 00:15:03.214 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:15:03.214 12:13:58 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:03.214 12:13:58 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:03.214 12:13:58 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:15:03.214 12:13:58 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:03.214 12:13:58 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:03.214 12:13:58 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:03.214 12:13:58 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:03.214 12:13:58 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:15:03.214 12:13:58 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:15:03.214 12:13:58 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:15:03.214 12:13:58 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:15:03.214 12:13:58 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:15:03.214 12:13:58 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:15:03.214 12:13:58 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:15:03.214 12:13:58 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:03.214 12:13:58 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:15:03.214 12:13:58 skip_rpc -- scripts/common.sh@345 -- # : 1 00:15:03.214 12:13:58 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:03.214 12:13:58 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:03.214 12:13:58 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:15:03.214 12:13:58 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:15:03.214 12:13:58 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:03.214 12:13:58 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:15:03.214 12:13:58 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:15:03.214 12:13:58 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:15:03.214 12:13:58 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:15:03.214 12:13:58 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:03.214 12:13:58 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:15:03.214 12:13:58 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:15:03.214 12:13:58 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:03.214 12:13:58 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:03.214 12:13:58 skip_rpc -- scripts/common.sh@368 -- # return 0 00:15:03.214 12:13:58 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:03.214 12:13:58 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:03.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:03.214 --rc genhtml_branch_coverage=1 00:15:03.214 --rc genhtml_function_coverage=1 00:15:03.214 --rc genhtml_legend=1 00:15:03.214 --rc geninfo_all_blocks=1 00:15:03.214 --rc geninfo_unexecuted_blocks=1 00:15:03.214 00:15:03.214 ' 00:15:03.214 12:13:58 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:03.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:03.214 --rc genhtml_branch_coverage=1 00:15:03.214 --rc genhtml_function_coverage=1 00:15:03.214 --rc genhtml_legend=1 00:15:03.214 --rc geninfo_all_blocks=1 00:15:03.214 --rc geninfo_unexecuted_blocks=1 00:15:03.214 00:15:03.214 ' 00:15:03.214 12:13:58 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:03.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:03.214 --rc genhtml_branch_coverage=1 00:15:03.214 --rc genhtml_function_coverage=1 00:15:03.214 --rc genhtml_legend=1 00:15:03.214 --rc geninfo_all_blocks=1 00:15:03.214 --rc geninfo_unexecuted_blocks=1 00:15:03.214 00:15:03.214 ' 00:15:03.214 12:13:58 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:03.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:03.214 --rc genhtml_branch_coverage=1 00:15:03.214 --rc genhtml_function_coverage=1 00:15:03.214 --rc genhtml_legend=1 00:15:03.214 --rc geninfo_all_blocks=1 00:15:03.214 --rc geninfo_unexecuted_blocks=1 00:15:03.214 00:15:03.214 ' 00:15:03.214 12:13:58 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:15:03.214 12:13:58 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:15:03.214 12:13:58 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:15:03.214 12:13:58 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:03.214 12:13:58 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:03.214 12:13:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:03.214 ************************************ 00:15:03.214 START TEST skip_rpc 00:15:03.214 ************************************ 00:15:03.214 12:13:58 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:15:03.214 12:13:58 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=58290 00:15:03.214 12:13:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:15:03.215 12:13:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:15:03.215 12:13:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:15:03.215 [2024-11-25 12:13:59.034084] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:15:03.215 [2024-11-25 12:13:59.034443] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58290 ] 00:15:03.472 [2024-11-25 12:13:59.172675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.472 [2024-11-25 12:13:59.220650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:08.730 12:14:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:15:08.730 12:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:15:08.730 12:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:15:08.730 12:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:08.730 12:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:08.730 12:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:08.730 12:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:08.730 12:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:15:08.730 12:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.730 12:14:03 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:08.730 2024/11/25 12:14:04 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:15:08.730 12:14:04 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:08.730 12:14:04 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:15:08.730 12:14:04 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:08.730 12:14:04 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:08.730 12:14:04 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:08.730 12:14:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:15:08.730 12:14:04 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58290 00:15:08.730 12:14:04 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 58290 ']' 00:15:08.730 12:14:04 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 58290 00:15:08.730 12:14:04 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:15:08.730 12:14:04 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:08.730 12:14:04 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58290 00:15:08.730 12:14:04 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:08.730 12:14:04 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:15:08.730 12:14:04 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58290' 00:15:08.730 killing process with pid 58290 00:15:08.730 12:14:04 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 58290 00:15:08.730 12:14:04 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 58290 00:15:08.730 00:15:08.730 real 0m5.224s 00:15:08.730 user 0m4.900s 00:15:08.730 sys 0m0.213s 00:15:08.730 12:14:04 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:08.730 12:14:04 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:08.730 ************************************ 00:15:08.730 END TEST skip_rpc 00:15:08.730 ************************************ 00:15:08.730 12:14:04 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:15:08.730 12:14:04 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:08.730 12:14:04 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:08.730 12:14:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:08.730 ************************************ 00:15:08.730 START TEST skip_rpc_with_json 00:15:08.730 ************************************ 00:15:08.730 12:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:15:08.730 12:14:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:15:08.730 12:14:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58383 00:15:08.730 12:14:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:15:08.730 12:14:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58383 00:15:08.730 12:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 58383 ']' 00:15:08.730 12:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.730 12:14:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:15:08.730 12:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:08.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:08.730 12:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:08.730 12:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:08.730 12:14:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:15:08.730 [2024-11-25 12:14:04.290333] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:15:08.730 [2024-11-25 12:14:04.290392] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58383 ] 00:15:08.730 [2024-11-25 12:14:04.420038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.730 [2024-11-25 12:14:04.464432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:09.296 12:14:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:09.296 12:14:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:15:09.296 12:14:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:15:09.296 12:14:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.296 12:14:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:15:09.554 [2024-11-25 12:14:05.172684] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:15:09.554 request: 00:15:09.554 { 00:15:09.554 "method": "nvmf_get_transports", 00:15:09.554 "params": { 00:15:09.554 "trtype": "tcp" 00:15:09.554 2024/11/25 12:14:05 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:15:09.554 } 00:15:09.554 } 00:15:09.554 Got JSON-RPC error response 00:15:09.554 GoRPCClient: error on JSON-RPC call 00:15:09.554 12:14:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:09.554 12:14:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:15:09.554 12:14:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.554 12:14:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:15:09.554 [2024-11-25 12:14:05.180762] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:09.554 12:14:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.554 12:14:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:15:09.554 12:14:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.554 12:14:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:15:09.554 12:14:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.554 12:14:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:15:09.554 { 00:15:09.554 "subsystems": [ 00:15:09.554 { 00:15:09.554 "subsystem": "fsdev", 00:15:09.554 "config": [ 00:15:09.554 { 00:15:09.554 "method": "fsdev_set_opts", 00:15:09.554 "params": { 00:15:09.554 "fsdev_io_cache_size": 256, 00:15:09.554 "fsdev_io_pool_size": 65535 00:15:09.554 } 00:15:09.554 } 00:15:09.554 ] 00:15:09.554 }, 00:15:09.554 { 00:15:09.554 "subsystem": "keyring", 00:15:09.554 "config": [] 00:15:09.554 }, 00:15:09.554 { 00:15:09.554 "subsystem": "iobuf", 00:15:09.554 "config": [ 00:15:09.554 { 00:15:09.554 "method": "iobuf_set_options", 00:15:09.554 "params": { 00:15:09.554 "enable_numa": false, 00:15:09.554 "large_bufsize": 135168, 00:15:09.554 "large_pool_count": 1024, 00:15:09.554 "small_bufsize": 8192, 00:15:09.554 "small_pool_count": 8192 00:15:09.554 } 
00:15:09.554 } 00:15:09.554 ] 00:15:09.554 }, 00:15:09.554 { 00:15:09.554 "subsystem": "sock", 00:15:09.554 "config": [ 00:15:09.554 { 00:15:09.554 "method": "sock_set_default_impl", 00:15:09.554 "params": { 00:15:09.554 "impl_name": "posix" 00:15:09.554 } 00:15:09.554 }, 00:15:09.554 { 00:15:09.554 "method": "sock_impl_set_options", 00:15:09.554 "params": { 00:15:09.554 "enable_ktls": false, 00:15:09.554 "enable_placement_id": 0, 00:15:09.554 "enable_quickack": false, 00:15:09.554 "enable_recv_pipe": true, 00:15:09.554 "enable_zerocopy_send_client": false, 00:15:09.554 "enable_zerocopy_send_server": true, 00:15:09.554 "impl_name": "ssl", 00:15:09.554 "recv_buf_size": 4096, 00:15:09.554 "send_buf_size": 4096, 00:15:09.554 "tls_version": 0, 00:15:09.554 "zerocopy_threshold": 0 00:15:09.554 } 00:15:09.554 }, 00:15:09.554 { 00:15:09.554 "method": "sock_impl_set_options", 00:15:09.554 "params": { 00:15:09.554 "enable_ktls": false, 00:15:09.554 "enable_placement_id": 0, 00:15:09.554 "enable_quickack": false, 00:15:09.554 "enable_recv_pipe": true, 00:15:09.554 "enable_zerocopy_send_client": false, 00:15:09.554 "enable_zerocopy_send_server": true, 00:15:09.554 "impl_name": "posix", 00:15:09.554 "recv_buf_size": 2097152, 00:15:09.554 "send_buf_size": 2097152, 00:15:09.554 "tls_version": 0, 00:15:09.554 "zerocopy_threshold": 0 00:15:09.554 } 00:15:09.554 } 00:15:09.554 ] 00:15:09.554 }, 00:15:09.554 { 00:15:09.554 "subsystem": "vmd", 00:15:09.554 "config": [] 00:15:09.554 }, 00:15:09.554 { 00:15:09.554 "subsystem": "accel", 00:15:09.554 "config": [ 00:15:09.554 { 00:15:09.554 "method": "accel_set_options", 00:15:09.554 "params": { 00:15:09.554 "buf_count": 2048, 00:15:09.554 "large_cache_size": 16, 00:15:09.554 "sequence_count": 2048, 00:15:09.554 "small_cache_size": 128, 00:15:09.554 "task_count": 2048 00:15:09.554 } 00:15:09.554 } 00:15:09.554 ] 00:15:09.554 }, 00:15:09.554 { 00:15:09.554 "subsystem": "bdev", 00:15:09.554 "config": [ 00:15:09.554 { 00:15:09.554 "method": "bdev_set_options", 00:15:09.554 "params": { 00:15:09.554 "bdev_auto_examine": true, 00:15:09.554 "bdev_io_cache_size": 256, 00:15:09.554 "bdev_io_pool_size": 65535, 00:15:09.554 "iobuf_large_cache_size": 16, 00:15:09.554 "iobuf_small_cache_size": 128 00:15:09.554 } 00:15:09.554 }, 00:15:09.554 { 00:15:09.554 "method": "bdev_raid_set_options", 00:15:09.554 "params": { 00:15:09.554 "process_max_bandwidth_mb_sec": 0, 00:15:09.554 "process_window_size_kb": 1024 00:15:09.554 } 00:15:09.554 }, 00:15:09.554 { 00:15:09.554 "method": "bdev_iscsi_set_options", 00:15:09.554 "params": { 00:15:09.554 "timeout_sec": 30 00:15:09.554 } 00:15:09.554 }, 00:15:09.554 { 00:15:09.554 "method": "bdev_nvme_set_options", 00:15:09.554 "params": { 00:15:09.554 "action_on_timeout": "none", 00:15:09.554 "allow_accel_sequence": false, 00:15:09.554 "arbitration_burst": 0, 00:15:09.554 "bdev_retry_count": 3, 00:15:09.554 "ctrlr_loss_timeout_sec": 0, 00:15:09.554 "delay_cmd_submit": true, 00:15:09.554 "dhchap_dhgroups": [ 00:15:09.554 "null", 00:15:09.554 "ffdhe2048", 00:15:09.554 "ffdhe3072", 00:15:09.554 "ffdhe4096", 00:15:09.554 "ffdhe6144", 00:15:09.554 "ffdhe8192" 00:15:09.554 ], 00:15:09.554 "dhchap_digests": [ 00:15:09.554 "sha256", 00:15:09.554 "sha384", 00:15:09.554 "sha512" 00:15:09.554 ], 00:15:09.554 "disable_auto_failback": false, 00:15:09.554 "fast_io_fail_timeout_sec": 0, 00:15:09.554 "generate_uuids": false, 00:15:09.554 "high_priority_weight": 0, 00:15:09.554 "io_path_stat": false, 00:15:09.554 "io_queue_requests": 0, 00:15:09.554 
"keep_alive_timeout_ms": 10000, 00:15:09.554 "low_priority_weight": 0, 00:15:09.554 "medium_priority_weight": 0, 00:15:09.554 "nvme_adminq_poll_period_us": 10000, 00:15:09.554 "nvme_error_stat": false, 00:15:09.554 "nvme_ioq_poll_period_us": 0, 00:15:09.554 "rdma_cm_event_timeout_ms": 0, 00:15:09.554 "rdma_max_cq_size": 0, 00:15:09.554 "rdma_srq_size": 0, 00:15:09.554 "reconnect_delay_sec": 0, 00:15:09.554 "timeout_admin_us": 0, 00:15:09.554 "timeout_us": 0, 00:15:09.554 "transport_ack_timeout": 0, 00:15:09.554 "transport_retry_count": 4, 00:15:09.554 "transport_tos": 0 00:15:09.554 } 00:15:09.554 }, 00:15:09.554 { 00:15:09.554 "method": "bdev_nvme_set_hotplug", 00:15:09.554 "params": { 00:15:09.554 "enable": false, 00:15:09.554 "period_us": 100000 00:15:09.554 } 00:15:09.554 }, 00:15:09.554 { 00:15:09.554 "method": "bdev_wait_for_examine" 00:15:09.554 } 00:15:09.554 ] 00:15:09.554 }, 00:15:09.554 { 00:15:09.554 "subsystem": "scsi", 00:15:09.554 "config": null 00:15:09.555 }, 00:15:09.555 { 00:15:09.555 "subsystem": "scheduler", 00:15:09.555 "config": [ 00:15:09.555 { 00:15:09.555 "method": "framework_set_scheduler", 00:15:09.555 "params": { 00:15:09.555 "name": "static" 00:15:09.555 } 00:15:09.555 } 00:15:09.555 ] 00:15:09.555 }, 00:15:09.555 { 00:15:09.555 "subsystem": "vhost_scsi", 00:15:09.555 "config": [] 00:15:09.555 }, 00:15:09.555 { 00:15:09.555 "subsystem": "vhost_blk", 00:15:09.555 "config": [] 00:15:09.555 }, 00:15:09.555 { 00:15:09.555 "subsystem": "ublk", 00:15:09.555 "config": [] 00:15:09.555 }, 00:15:09.555 { 00:15:09.555 "subsystem": "nbd", 00:15:09.555 "config": [] 00:15:09.555 }, 00:15:09.555 { 00:15:09.555 "subsystem": "nvmf", 00:15:09.555 "config": [ 00:15:09.555 { 00:15:09.555 "method": "nvmf_set_config", 00:15:09.555 "params": { 00:15:09.555 "admin_cmd_passthru": { 00:15:09.555 "identify_ctrlr": false 00:15:09.555 }, 00:15:09.555 "dhchap_dhgroups": [ 00:15:09.555 "null", 00:15:09.555 "ffdhe2048", 00:15:09.555 "ffdhe3072", 00:15:09.555 "ffdhe4096", 00:15:09.555 "ffdhe6144", 00:15:09.555 "ffdhe8192" 00:15:09.555 ], 00:15:09.555 "dhchap_digests": [ 00:15:09.555 "sha256", 00:15:09.555 "sha384", 00:15:09.555 "sha512" 00:15:09.555 ], 00:15:09.555 "discovery_filter": "match_any" 00:15:09.555 } 00:15:09.555 }, 00:15:09.555 { 00:15:09.555 "method": "nvmf_set_max_subsystems", 00:15:09.555 "params": { 00:15:09.555 "max_subsystems": 1024 00:15:09.555 } 00:15:09.555 }, 00:15:09.555 { 00:15:09.555 "method": "nvmf_set_crdt", 00:15:09.555 "params": { 00:15:09.555 "crdt1": 0, 00:15:09.555 "crdt2": 0, 00:15:09.555 "crdt3": 0 00:15:09.555 } 00:15:09.555 }, 00:15:09.555 { 00:15:09.555 "method": "nvmf_create_transport", 00:15:09.555 "params": { 00:15:09.555 "abort_timeout_sec": 1, 00:15:09.555 "ack_timeout": 0, 00:15:09.555 "buf_cache_size": 4294967295, 00:15:09.555 "c2h_success": true, 00:15:09.555 "data_wr_pool_size": 0, 00:15:09.555 "dif_insert_or_strip": false, 00:15:09.555 "in_capsule_data_size": 4096, 00:15:09.555 "io_unit_size": 131072, 00:15:09.555 "max_aq_depth": 128, 00:15:09.555 "max_io_qpairs_per_ctrlr": 127, 00:15:09.555 "max_io_size": 131072, 00:15:09.555 "max_queue_depth": 128, 00:15:09.555 "num_shared_buffers": 511, 00:15:09.555 "sock_priority": 0, 00:15:09.555 "trtype": "TCP", 00:15:09.555 "zcopy": false 00:15:09.555 } 00:15:09.555 } 00:15:09.555 ] 00:15:09.555 }, 00:15:09.555 { 00:15:09.555 "subsystem": "iscsi", 00:15:09.555 "config": [ 00:15:09.555 { 00:15:09.555 "method": "iscsi_set_options", 00:15:09.555 "params": { 00:15:09.555 "allow_duplicated_isid": false, 
00:15:09.555 "chap_group": 0, 00:15:09.555 "data_out_pool_size": 2048, 00:15:09.555 "default_time2retain": 20, 00:15:09.555 "default_time2wait": 2, 00:15:09.555 "disable_chap": false, 00:15:09.555 "error_recovery_level": 0, 00:15:09.555 "first_burst_length": 8192, 00:15:09.555 "immediate_data": true, 00:15:09.555 "immediate_data_pool_size": 16384, 00:15:09.555 "max_connections_per_session": 2, 00:15:09.555 "max_large_datain_per_connection": 64, 00:15:09.555 "max_queue_depth": 64, 00:15:09.555 "max_r2t_per_connection": 4, 00:15:09.555 "max_sessions": 128, 00:15:09.555 "mutual_chap": false, 00:15:09.555 "node_base": "iqn.2016-06.io.spdk", 00:15:09.555 "nop_in_interval": 30, 00:15:09.555 "nop_timeout": 60, 00:15:09.555 "pdu_pool_size": 36864, 00:15:09.555 "require_chap": false 00:15:09.555 } 00:15:09.555 } 00:15:09.555 ] 00:15:09.555 } 00:15:09.555 ] 00:15:09.555 } 00:15:09.555 12:14:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:15:09.555 12:14:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58383 00:15:09.555 12:14:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58383 ']' 00:15:09.555 12:14:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58383 00:15:09.555 12:14:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:15:09.555 12:14:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:09.555 12:14:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58383 00:15:09.555 12:14:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:09.555 killing process with pid 58383 00:15:09.555 12:14:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:09.555 12:14:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58383' 00:15:09.555 12:14:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58383 00:15:09.555 12:14:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58383 00:15:09.813 12:14:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58422 00:15:09.814 12:14:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:15:09.814 12:14:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:15:15.074 12:14:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58422 00:15:15.074 12:14:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58422 ']' 00:15:15.074 12:14:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58422 00:15:15.074 12:14:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:15:15.074 12:14:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:15.074 12:14:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58422 00:15:15.074 12:14:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:15.074 12:14:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:15.074 killing process with pid 58422 00:15:15.074 12:14:10 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58422' 00:15:15.074 12:14:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58422 00:15:15.074 12:14:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58422 00:15:15.074 12:14:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:15:15.074 12:14:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:15:15.074 00:15:15.074 real 0m6.532s 00:15:15.074 user 0m6.371s 00:15:15.074 sys 0m0.409s 00:15:15.074 12:14:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:15.074 12:14:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:15:15.074 ************************************ 00:15:15.074 END TEST skip_rpc_with_json 00:15:15.074 ************************************ 00:15:15.074 12:14:10 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:15:15.074 12:14:10 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:15.074 12:14:10 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:15.074 12:14:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.074 ************************************ 00:15:15.074 START TEST skip_rpc_with_delay 00:15:15.074 ************************************ 00:15:15.074 12:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:15:15.074 12:14:10 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:15:15.074 12:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:15:15.074 12:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:15:15.074 12:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:15.074 12:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:15.074 12:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:15.074 12:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:15.074 12:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:15.074 12:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:15.074 12:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:15.074 12:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:15:15.074 12:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:15:15.074 [2024-11-25 12:14:10.869540] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
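The skip_rpc_with_json run above exercises the save/restore path: query the TCP transport before it exists (the expected JSON-RPC Code=-19 error), create it, dump the live configuration with save_config, then boot a fresh target from that JSON and grep its log for 'TCP Transport Init'. A minimal sketch of the same sequence, assuming it is run from the root of an SPDK checkout with the default /var/tmp/spdk.sock RPC socket:

    ./build/bin/spdk_tgt -m 0x1 &                      # target up, reactor on core 0
    ./scripts/rpc.py nvmf_get_transports --trtype tcp  # fails: transport 'tcp' does not exist yet
    ./scripts/rpc.py nvmf_create_transport -t tcp      # prints *** TCP Transport Init ***
    ./scripts/rpc.py save_config > config.json         # persist the running config as JSON
    kill -INT %1 && wait                               # stop the first instance
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json  # restore straight from the file

The app.c ERROR directly above is the expected outcome of skip_rpc_with_delay rather than a failure: --wait-for-rpc tells the app to pause initialization until an RPC arrives, which is contradictory when --no-rpc-server suppresses the RPC server entirely, so spdk_tgt refuses to start and the test passes on the non-zero exit status checked next (es=1).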
00:15:15.075 12:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:15:15.075 12:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:15.075 12:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:15.075 12:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:15.075 00:15:15.075 real 0m0.058s 00:15:15.075 user 0m0.036s 00:15:15.075 sys 0m0.022s 00:15:15.075 12:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:15.075 12:14:10 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:15:15.075 ************************************ 00:15:15.075 END TEST skip_rpc_with_delay 00:15:15.075 ************************************ 00:15:15.075 12:14:10 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:15:15.075 12:14:10 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:15:15.075 12:14:10 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:15:15.075 12:14:10 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:15.075 12:14:10 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:15.075 12:14:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:15.075 ************************************ 00:15:15.075 START TEST exit_on_failed_rpc_init 00:15:15.075 ************************************ 00:15:15.075 12:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:15:15.075 12:14:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58526 00:15:15.075 12:14:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58526 00:15:15.075 12:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 58526 ']' 00:15:15.075 12:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:15.075 12:14:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:15:15.075 12:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:15.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:15.075 12:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:15.075 12:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:15.075 12:14:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:15:15.333 [2024-11-25 12:14:10.971451] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:15:15.333 [2024-11-25 12:14:10.971518] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58526 ] 00:15:15.333 [2024-11-25 12:14:11.107703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.333 [2024-11-25 12:14:11.143826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.266 12:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:16.266 12:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:15:16.266 12:14:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:15:16.266 12:14:11 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:15:16.266 12:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:15:16.266 12:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:15:16.266 12:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:16.266 12:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:16.266 12:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:16.266 12:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:16.266 12:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:16.266 12:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:16.266 12:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:16.266 12:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:15:16.266 12:14:11 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:15:16.266 [2024-11-25 12:14:11.862693] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:15:16.266 [2024-11-25 12:14:11.862776] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58556 ] 00:15:16.266 [2024-11-25 12:14:11.999545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.266 [2024-11-25 12:14:12.034555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:16.266 [2024-11-25 12:14:12.034618] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
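The listen failure above is the point of exit_on_failed_rpc_init, and it cascades into the two shutdown messages that follow: both spdk_tgt instances default to /var/tmp/spdk.sock, so the second one (pid 58556, reactor on core 1) cannot bind its RPC socket and spdk_app_stop exits non-zero. Sketched with a hypothetical alternate socket path to show the usual way around the collision:

    ./build/bin/spdk_tgt -m 0x1 &                        # first instance owns /var/tmp/spdk.sock
    ./build/bin/spdk_tgt -m 0x2                          # second instance: socket in use, init fails
    ./build/bin/spdk_tgt -m 0x2 -r /var/tmp/other.sock   # would coexist: separate RPC socket (example path)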
00:15:16.266 [2024-11-25 12:14:12.034626] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:15:16.266 [2024-11-25 12:14:12.034631] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:16.266 12:14:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:15:16.266 12:14:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:16.266 12:14:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:15:16.266 12:14:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:15:16.266 12:14:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:15:16.266 12:14:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:16.266 12:14:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:15:16.266 12:14:12 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58526 00:15:16.266 12:14:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 58526 ']' 00:15:16.267 12:14:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 58526 00:15:16.267 12:14:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:15:16.267 12:14:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:16.267 12:14:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58526 00:15:16.267 12:14:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:16.267 12:14:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:16.267 killing process with pid 58526 00:15:16.267 12:14:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58526' 00:15:16.267 12:14:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 58526 00:15:16.267 12:14:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 58526 00:15:16.524 00:15:16.524 real 0m1.375s 00:15:16.524 user 0m1.583s 00:15:16.524 sys 0m0.239s 00:15:16.524 12:14:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:16.524 ************************************ 00:15:16.524 END TEST exit_on_failed_rpc_init 00:15:16.524 ************************************ 00:15:16.524 12:14:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:15:16.524 12:14:12 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:15:16.524 00:15:16.524 real 0m13.492s 00:15:16.524 user 0m13.010s 00:15:16.524 sys 0m1.064s 00:15:16.524 12:14:12 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:16.524 12:14:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:16.524 ************************************ 00:15:16.524 END TEST skip_rpc 00:15:16.524 ************************************ 00:15:16.524 12:14:12 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:15:16.524 12:14:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:16.524 12:14:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:16.524 12:14:12 -- common/autotest_common.sh@10 -- # set +x 00:15:16.524 
************************************ 00:15:16.525 START TEST rpc_client 00:15:16.525 ************************************ 00:15:16.525 12:14:12 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:15:16.782 * Looking for test storage... 00:15:16.782 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:15:16.782 12:14:12 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:16.782 12:14:12 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:15:16.782 12:14:12 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:16.782 12:14:12 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:16.782 12:14:12 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:16.782 12:14:12 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:16.782 12:14:12 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:16.782 12:14:12 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:15:16.782 12:14:12 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:15:16.782 12:14:12 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:15:16.782 12:14:12 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:15:16.782 12:14:12 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:15:16.782 12:14:12 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:15:16.782 12:14:12 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:15:16.782 12:14:12 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:16.782 12:14:12 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:15:16.782 12:14:12 rpc_client -- scripts/common.sh@345 -- # : 1 00:15:16.782 12:14:12 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:16.782 12:14:12 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:16.782 12:14:12 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:15:16.782 12:14:12 rpc_client -- scripts/common.sh@353 -- # local d=1 00:15:16.782 12:14:12 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:16.782 12:14:12 rpc_client -- scripts/common.sh@355 -- # echo 1 00:15:16.782 12:14:12 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:15:16.782 12:14:12 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:15:16.782 12:14:12 rpc_client -- scripts/common.sh@353 -- # local d=2 00:15:16.782 12:14:12 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:16.782 12:14:12 rpc_client -- scripts/common.sh@355 -- # echo 2 00:15:16.782 12:14:12 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:15:16.782 12:14:12 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:16.782 12:14:12 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:16.782 12:14:12 rpc_client -- scripts/common.sh@368 -- # return 0 00:15:16.782 12:14:12 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:16.782 12:14:12 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:16.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.782 --rc genhtml_branch_coverage=1 00:15:16.782 --rc genhtml_function_coverage=1 00:15:16.782 --rc genhtml_legend=1 00:15:16.782 --rc geninfo_all_blocks=1 00:15:16.782 --rc geninfo_unexecuted_blocks=1 00:15:16.782 00:15:16.782 ' 00:15:16.782 12:14:12 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:16.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.782 --rc genhtml_branch_coverage=1 00:15:16.782 --rc genhtml_function_coverage=1 00:15:16.782 --rc genhtml_legend=1 00:15:16.782 --rc geninfo_all_blocks=1 00:15:16.782 --rc geninfo_unexecuted_blocks=1 00:15:16.782 00:15:16.782 ' 00:15:16.782 12:14:12 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:16.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.782 --rc genhtml_branch_coverage=1 00:15:16.782 --rc genhtml_function_coverage=1 00:15:16.782 --rc genhtml_legend=1 00:15:16.782 --rc geninfo_all_blocks=1 00:15:16.782 --rc geninfo_unexecuted_blocks=1 00:15:16.782 00:15:16.782 ' 00:15:16.782 12:14:12 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:16.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.782 --rc genhtml_branch_coverage=1 00:15:16.782 --rc genhtml_function_coverage=1 00:15:16.782 --rc genhtml_legend=1 00:15:16.782 --rc geninfo_all_blocks=1 00:15:16.782 --rc geninfo_unexecuted_blocks=1 00:15:16.782 00:15:16.782 ' 00:15:16.782 12:14:12 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:15:16.782 OK 00:15:16.782 12:14:12 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:15:16.782 00:15:16.782 real 0m0.151s 00:15:16.782 user 0m0.097s 00:15:16.782 sys 0m0.063s 00:15:16.782 12:14:12 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:16.782 12:14:12 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:15:16.782 ************************************ 00:15:16.782 END TEST rpc_client 00:15:16.782 ************************************ 00:15:16.782 12:14:12 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:15:16.782 12:14:12 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:16.782 12:14:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:16.782 12:14:12 -- common/autotest_common.sh@10 -- # set +x 00:15:16.782 ************************************ 00:15:16.782 START TEST json_config 00:15:16.782 ************************************ 00:15:16.782 12:14:12 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:15:16.782 12:14:12 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:16.782 12:14:12 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:15:16.782 12:14:12 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:17.044 12:14:12 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:17.044 12:14:12 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:17.044 12:14:12 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:17.044 12:14:12 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:17.044 12:14:12 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:15:17.044 12:14:12 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:15:17.044 12:14:12 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:15:17.044 12:14:12 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:15:17.044 12:14:12 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:15:17.044 12:14:12 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:15:17.044 12:14:12 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:15:17.044 12:14:12 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:17.044 12:14:12 json_config -- scripts/common.sh@344 -- # case "$op" in 00:15:17.044 12:14:12 json_config -- scripts/common.sh@345 -- # : 1 00:15:17.044 12:14:12 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:17.044 12:14:12 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:17.044 12:14:12 json_config -- scripts/common.sh@365 -- # decimal 1 00:15:17.044 12:14:12 json_config -- scripts/common.sh@353 -- # local d=1 00:15:17.044 12:14:12 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:17.044 12:14:12 json_config -- scripts/common.sh@355 -- # echo 1 00:15:17.044 12:14:12 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:15:17.044 12:14:12 json_config -- scripts/common.sh@366 -- # decimal 2 00:15:17.044 12:14:12 json_config -- scripts/common.sh@353 -- # local d=2 00:15:17.044 12:14:12 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:17.044 12:14:12 json_config -- scripts/common.sh@355 -- # echo 2 00:15:17.044 12:14:12 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:15:17.044 12:14:12 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:17.044 12:14:12 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:17.044 12:14:12 json_config -- scripts/common.sh@368 -- # return 0 00:15:17.044 12:14:12 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:17.044 12:14:12 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:17.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.044 --rc genhtml_branch_coverage=1 00:15:17.044 --rc genhtml_function_coverage=1 00:15:17.044 --rc genhtml_legend=1 00:15:17.044 --rc geninfo_all_blocks=1 00:15:17.044 --rc geninfo_unexecuted_blocks=1 00:15:17.044 00:15:17.044 ' 00:15:17.044 12:14:12 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:17.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.044 --rc genhtml_branch_coverage=1 00:15:17.044 --rc genhtml_function_coverage=1 00:15:17.044 --rc genhtml_legend=1 00:15:17.044 --rc geninfo_all_blocks=1 00:15:17.044 --rc geninfo_unexecuted_blocks=1 00:15:17.044 00:15:17.044 ' 00:15:17.044 12:14:12 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:17.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.044 --rc genhtml_branch_coverage=1 00:15:17.044 --rc genhtml_function_coverage=1 00:15:17.044 --rc genhtml_legend=1 00:15:17.044 --rc geninfo_all_blocks=1 00:15:17.044 --rc geninfo_unexecuted_blocks=1 00:15:17.044 00:15:17.044 ' 00:15:17.044 12:14:12 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:17.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.044 --rc genhtml_branch_coverage=1 00:15:17.044 --rc genhtml_function_coverage=1 00:15:17.044 --rc genhtml_legend=1 00:15:17.044 --rc geninfo_all_blocks=1 00:15:17.044 --rc geninfo_unexecuted_blocks=1 00:15:17.044 00:15:17.044 ' 00:15:17.044 12:14:12 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:17.044 12:14:12 json_config -- nvmf/common.sh@7 -- # uname -s 00:15:17.044 12:14:12 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:17.044 12:14:12 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:17.044 12:14:12 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:17.044 12:14:12 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:17.044 12:14:12 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:17.044 12:14:12 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:17.044 12:14:12 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:17.044 12:14:12 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:17.044 12:14:12 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:17.044 12:14:12 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:17.044 12:14:12 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:15:17.044 12:14:12 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:15:17.044 12:14:12 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:17.044 12:14:12 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:17.044 12:14:12 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:15:17.044 12:14:12 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:17.044 12:14:12 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:17.044 12:14:12 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:15:17.044 12:14:12 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:17.044 12:14:12 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:17.044 12:14:12 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:17.044 12:14:12 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.044 12:14:12 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.044 12:14:12 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.044 12:14:12 json_config -- paths/export.sh@5 -- # export PATH 00:15:17.045 12:14:12 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.045 12:14:12 json_config -- nvmf/common.sh@51 -- # : 0 00:15:17.045 12:14:12 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:17.045 12:14:12 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:17.045 12:14:12 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:17.045 12:14:12 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:17.045 12:14:12 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:17.045 12:14:12 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:17.045 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:17.045 12:14:12 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:17.045 12:14:12 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:17.045 12:14:12 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:17.045 12:14:12 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:15:17.045 12:14:12 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:15:17.045 12:14:12 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:15:17.045 12:14:12 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:15:17.045 12:14:12 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:15:17.045 12:14:12 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:15:17.045 12:14:12 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:15:17.045 12:14:12 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:15:17.045 12:14:12 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:15:17.045 12:14:12 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:15:17.045 12:14:12 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:15:17.045 12:14:12 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:15:17.045 12:14:12 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:15:17.045 12:14:12 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:15:17.045 12:14:12 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:15:17.045 INFO: JSON configuration test init 00:15:17.045 12:14:12 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:15:17.045 12:14:12 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:15:17.045 12:14:12 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:15:17.045 12:14:12 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:17.045 12:14:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:17.045 12:14:12 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:15:17.045 12:14:12 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:17.045 12:14:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:17.045 12:14:12 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:15:17.045 12:14:12 json_config -- json_config/common.sh@9 -- # local app=target 00:15:17.045 12:14:12 json_config -- json_config/common.sh@10 -- # shift 
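Unlike the skip_rpc tests, the json_config harness drives the target on a dedicated RPC socket (/var/tmp/spdk_tgt.sock) so an initiator app on /var/tmp/spdk_initiator.sock can run beside it. The init phase that follows populates an NVMe-oF target purely over JSON-RPC; condensed into the underlying rpc.py calls, with arguments as they appear in the trace below and a small wrapper function added here for brevity:

    rpc () { ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"; }
    rpc bdev_malloc_create 8 512 --name MallocForNvmf0     # 8 MB malloc bdev, 512 B blocks
    rpc bdev_malloc_create 4 1024 --name MallocForNvmf1    # 4 MB malloc bdev, 1024 B blocks
    rpc nvmf_create_transport -t tcp -u 8192 -c 0          # -u io-unit-size, -c in-capsule data size
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420

A throwaway MallocBdevForConfigChangeCheck bdev is also created so the change-detection pass later has something safe to delete.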
00:15:17.045 12:14:12 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:15:17.045 12:14:12 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:15:17.045 12:14:12 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:15:17.045 12:14:12 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:15:17.045 12:14:12 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:15:17.045 12:14:12 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=58685 00:15:17.045 Waiting for target to run... 00:15:17.045 12:14:12 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:15:17.045 12:14:12 json_config -- json_config/common.sh@25 -- # waitforlisten 58685 /var/tmp/spdk_tgt.sock 00:15:17.045 12:14:12 json_config -- common/autotest_common.sh@835 -- # '[' -z 58685 ']' 00:15:17.045 12:14:12 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:15:17.045 12:14:12 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:17.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:15:17.045 12:14:12 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:15:17.045 12:14:12 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:17.045 12:14:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:17.045 12:14:12 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:15:17.045 [2024-11-25 12:14:12.745617] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:15:17.045 [2024-11-25 12:14:12.745683] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58685 ] 00:15:17.303 [2024-11-25 12:14:13.045935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.303 [2024-11-25 12:14:13.075024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.869 12:14:13 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:17.869 12:14:13 json_config -- common/autotest_common.sh@868 -- # return 0 00:15:17.869 00:15:17.869 12:14:13 json_config -- json_config/common.sh@26 -- # echo '' 00:15:17.869 12:14:13 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:15:17.869 12:14:13 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:15:17.869 12:14:13 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:17.869 12:14:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:17.869 12:14:13 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:15:17.869 12:14:13 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:15:17.869 12:14:13 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:17.869 12:14:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:17.869 12:14:13 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:15:17.869 12:14:13 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:15:17.869 12:14:13 json_config 
-- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:15:18.435 12:14:14 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:15:18.435 12:14:14 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:15:18.435 12:14:14 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:18.435 12:14:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:18.435 12:14:14 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:15:18.435 12:14:14 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:15:18.435 12:14:14 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:15:18.435 12:14:14 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:15:18.435 12:14:14 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:15:18.435 12:14:14 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:15:18.435 12:14:14 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:15:18.435 12:14:14 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:15:18.435 12:14:14 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:15:18.435 12:14:14 json_config -- json_config/json_config.sh@51 -- # local get_types 00:15:18.435 12:14:14 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:15:18.435 12:14:14 json_config -- json_config/json_config.sh@54 -- # sort 00:15:18.435 12:14:14 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:15:18.435 12:14:14 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:15:18.435 12:14:14 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:15:18.435 12:14:14 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:15:18.435 12:14:14 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:15:18.435 12:14:14 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:15:18.435 12:14:14 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:18.435 12:14:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:18.693 12:14:14 json_config -- json_config/json_config.sh@62 -- # return 0 00:15:18.693 12:14:14 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:15:18.693 12:14:14 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:15:18.693 12:14:14 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:15:18.693 12:14:14 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:15:18.693 12:14:14 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:15:18.693 12:14:14 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:15:18.693 12:14:14 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:18.693 12:14:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:18.693 12:14:14 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:15:18.693 12:14:14 json_config -- json_config/json_config.sh@240 -- # [[ tcp == 
\r\d\m\a ]] 00:15:18.693 12:14:14 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:15:18.693 12:14:14 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:15:18.693 12:14:14 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:15:18.693 MallocForNvmf0 00:15:18.693 12:14:14 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:15:18.693 12:14:14 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:15:18.951 MallocForNvmf1 00:15:18.951 12:14:14 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:15:18.951 12:14:14 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:15:19.209 [2024-11-25 12:14:14.926018] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:19.209 12:14:14 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:19.210 12:14:14 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:19.467 12:14:15 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:15:19.467 12:14:15 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:15:19.725 12:14:15 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:15:19.725 12:14:15 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:15:19.725 12:14:15 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:15:19.725 12:14:15 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:15:19.982 [2024-11-25 12:14:15.734351] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:15:19.982 12:14:15 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:15:19.982 12:14:15 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:19.982 12:14:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:19.982 12:14:15 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:15:19.982 12:14:15 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:19.982 12:14:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:19.982 12:14:15 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:15:19.982 12:14:15 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name 
MallocBdevForConfigChangeCheck 00:15:19.982 12:14:15 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:15:20.239 MallocBdevForConfigChangeCheck 00:15:20.239 12:14:16 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:15:20.239 12:14:16 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:20.239 12:14:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:20.239 12:14:16 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:15:20.239 12:14:16 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:15:20.802 12:14:16 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:15:20.802 INFO: shutting down applications... 00:15:20.802 12:14:16 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:15:20.802 12:14:16 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:15:20.802 12:14:16 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:15:20.802 12:14:16 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:15:20.802 Calling clear_iscsi_subsystem 00:15:20.802 Calling clear_nvmf_subsystem 00:15:20.803 Calling clear_nbd_subsystem 00:15:20.803 Calling clear_ublk_subsystem 00:15:20.803 Calling clear_vhost_blk_subsystem 00:15:20.803 Calling clear_vhost_scsi_subsystem 00:15:20.803 Calling clear_bdev_subsystem 00:15:21.060 12:14:16 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:15:21.060 12:14:16 json_config -- json_config/json_config.sh@350 -- # count=100 00:15:21.060 12:14:16 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:15:21.060 12:14:16 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:15:21.060 12:14:16 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:15:21.060 12:14:16 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:15:21.317 12:14:17 json_config -- json_config/json_config.sh@352 -- # break 00:15:21.317 12:14:17 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:15:21.317 12:14:17 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:15:21.317 12:14:17 json_config -- json_config/common.sh@31 -- # local app=target 00:15:21.317 12:14:17 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:15:21.317 12:14:17 json_config -- json_config/common.sh@35 -- # [[ -n 58685 ]] 00:15:21.317 12:14:17 json_config -- json_config/common.sh@38 -- # kill -SIGINT 58685 00:15:21.317 12:14:17 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:15:21.317 12:14:17 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:15:21.318 12:14:17 json_config -- json_config/common.sh@41 -- # kill -0 58685 00:15:21.318 12:14:17 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:15:21.883 12:14:17 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:15:21.883 12:14:17 json_config -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:15:21.883 12:14:17 json_config -- json_config/common.sh@41 -- # kill -0 58685 00:15:21.883 12:14:17 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:15:21.883 12:14:17 json_config -- json_config/common.sh@43 -- # break 00:15:21.883 12:14:17 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:15:21.883 SPDK target shutdown done 00:15:21.883 12:14:17 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:15:21.883 INFO: relaunching applications... 00:15:21.883 12:14:17 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:15:21.883 12:14:17 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:15:21.883 12:14:17 json_config -- json_config/common.sh@9 -- # local app=target 00:15:21.883 12:14:17 json_config -- json_config/common.sh@10 -- # shift 00:15:21.883 12:14:17 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:15:21.883 12:14:17 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:15:21.883 12:14:17 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:15:21.883 12:14:17 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:15:21.883 12:14:17 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:15:21.883 12:14:17 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=58959 00:15:21.883 Waiting for target to run... 00:15:21.883 12:14:17 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:15:21.883 12:14:17 json_config -- json_config/common.sh@25 -- # waitforlisten 58959 /var/tmp/spdk_tgt.sock 00:15:21.883 12:14:17 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:15:21.883 12:14:17 json_config -- common/autotest_common.sh@835 -- # '[' -z 58959 ']' 00:15:21.883 12:14:17 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:15:21.883 12:14:17 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:21.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:15:21.883 12:14:17 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:15:21.883 12:14:17 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:21.883 12:14:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:21.883 [2024-11-25 12:14:17.602172] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
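This shutdown/relaunch cycle is the round-trip check at the heart of json_config: the populated target is cleared and stopped with SIGINT, then a new one (pid 58959) boots purely from the saved spdk_tgt_config.json, and the 'TCP Transport Init' and 'Listening on 127.0.0.1 port 4420' notices below come from replaying that file, not from new RPC calls. The 'Calling clear_...' lines earlier are clear_config.py walking every subsystem; the same teardown can be run standalone:

    ./test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config  # empty every subsystem config live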
00:15:21.883 [2024-11-25 12:14:17.602238] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58959 ] 00:15:22.188 [2024-11-25 12:14:17.902387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:22.188 [2024-11-25 12:14:17.930383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.445 [2024-11-25 12:14:18.266065] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:22.445 [2024-11-25 12:14:18.298172] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:15:22.704 12:14:18 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:22.704 00:15:22.704 12:14:18 json_config -- common/autotest_common.sh@868 -- # return 0 00:15:22.704 12:14:18 json_config -- json_config/common.sh@26 -- # echo '' 00:15:22.704 12:14:18 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:15:22.704 INFO: Checking if target configuration is the same... 00:15:22.704 12:14:18 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:15:22.704 12:14:18 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:15:22.704 12:14:18 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:15:22.704 12:14:18 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:15:22.704 + '[' 2 -ne 2 ']' 00:15:22.704 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:15:22.704 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:15:22.704 + rootdir=/home/vagrant/spdk_repo/spdk 00:15:22.704 +++ basename /dev/fd/62 00:15:22.704 ++ mktemp /tmp/62.XXX 00:15:22.704 + tmp_file_1=/tmp/62.Yll 00:15:22.704 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:15:22.704 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:15:22.704 + tmp_file_2=/tmp/spdk_tgt_config.json.vMe 00:15:22.704 + ret=0 00:15:22.704 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:15:22.964 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:15:23.222 + diff -u /tmp/62.Yll /tmp/spdk_tgt_config.json.vMe 00:15:23.222 + echo 'INFO: JSON config files are the same' 00:15:23.222 INFO: JSON config files are the same 00:15:23.222 + rm /tmp/62.Yll /tmp/spdk_tgt_config.json.vMe 00:15:23.222 + exit 0 00:15:23.222 12:14:18 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:15:23.222 INFO: changing configuration and checking if this can be detected... 00:15:23.222 12:14:18 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
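The comparison that just passed is how json_diff.sh decides two configurations match: the live save_config dump (handed in on /dev/fd/62) and the on-disk spdk_tgt_config.json are both normalized with config_filter.py -method sort before diffing, so key ordering can never cause a false mismatch. Assuming config_filter.py reads stdin and writes stdout the way the pipeline above uses it, and with example temp-file names in place of the mktemp ones, the essence is:

    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | ./test/json_config/config_filter.py -method sort > /tmp/live.sorted
    ./test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/disk.sorted
    diff -u /tmp/disk.sorted /tmp/live.sorted && echo 'INFO: JSON config files are the same'

The next phase, announced above, deliberately changes the running config to prove the same machinery also catches a mismatch.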
00:15:23.222 12:14:18 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:15:23.222 12:14:18 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:15:23.222 12:14:19 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:15:23.222 12:14:19 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:15:23.222 12:14:19 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:15:23.222 + '[' 2 -ne 2 ']' 00:15:23.222 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:15:23.222 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:15:23.222 + rootdir=/home/vagrant/spdk_repo/spdk 00:15:23.222 +++ basename /dev/fd/62 00:15:23.492 ++ mktemp /tmp/62.XXX 00:15:23.492 + tmp_file_1=/tmp/62.Ekq 00:15:23.492 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:15:23.492 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:15:23.492 + tmp_file_2=/tmp/spdk_tgt_config.json.9HZ 00:15:23.492 + ret=0 00:15:23.493 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:15:23.754 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:15:23.754 + diff -u /tmp/62.Ekq /tmp/spdk_tgt_config.json.9HZ 00:15:23.754 + ret=1 00:15:23.754 + echo '=== Start of file: /tmp/62.Ekq ===' 00:15:23.754 + cat /tmp/62.Ekq 00:15:23.754 + echo '=== End of file: /tmp/62.Ekq ===' 00:15:23.754 + echo '' 00:15:23.754 + echo '=== Start of file: /tmp/spdk_tgt_config.json.9HZ ===' 00:15:23.754 + cat /tmp/spdk_tgt_config.json.9HZ 00:15:23.754 + echo '=== End of file: /tmp/spdk_tgt_config.json.9HZ ===' 00:15:23.754 + echo '' 00:15:23.754 + rm /tmp/62.Ekq /tmp/spdk_tgt_config.json.9HZ 00:15:23.754 + exit 1 00:15:23.754 INFO: configuration change detected. 00:15:23.754 12:14:19 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
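[Annotation, not part of the captured log] The two json_diff.sh passes above boil down to one technique: dump the live configuration over JSON-RPC, normalize both sides with config_filter.py -method sort, and diff the results, so exit 0 means "unchanged" and the second pass (run right after bdev_malloc_delete removed MallocBdevForConfigChangeCheck) is expected to return 1. A minimal standalone sketch of that flow, assuming a target already listening on /var/tmp/spdk_tgt.sock and SPDK_DIR pointing at an SPDK checkout (both assumptions, not taken from this log):

# Sketch of the save_config / sorted-diff comparison performed by json_diff.sh.
SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}   # assumed checkout location
sock=/var/tmp/spdk_tgt.sock                          # assumed RPC socket

live=$(mktemp /tmp/live.XXX)
saved=$(mktemp /tmp/saved.XXX)

# Dump the running target's configuration over JSON-RPC, normalized for diffing.
"$SPDK_DIR/scripts/rpc.py" -s "$sock" save_config |
    "$SPDK_DIR/test/json_config/config_filter.py" -method sort > "$live"

# Normalize the on-disk configuration the same way.
"$SPDK_DIR/test/json_config/config_filter.py" -method sort \
    < "$SPDK_DIR/spdk_tgt_config.json" > "$saved"

# diff exits 0 when both normalized configs match, non-zero otherwise.
if diff -u "$saved" "$live"; then
    echo 'INFO: JSON config files are the same'
else
    echo 'INFO: configuration change detected.'
fi
rm -f "$live" "$saved"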
00:15:23.754 12:14:19 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:15:23.754 12:14:19 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:15:23.754 12:14:19 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:23.754 12:14:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:23.754 12:14:19 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:15:23.754 12:14:19 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:15:23.754 12:14:19 json_config -- json_config/json_config.sh@324 -- # [[ -n 58959 ]] 00:15:23.754 12:14:19 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:15:23.754 12:14:19 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:15:23.754 12:14:19 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:23.754 12:14:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:23.754 12:14:19 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:15:23.754 12:14:19 json_config -- json_config/json_config.sh@200 -- # uname -s 00:15:23.755 12:14:19 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:15:23.755 12:14:19 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:15:23.755 12:14:19 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:15:23.755 12:14:19 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:15:23.755 12:14:19 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:23.755 12:14:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:23.755 12:14:19 json_config -- json_config/json_config.sh@330 -- # killprocess 58959 00:15:23.755 12:14:19 json_config -- common/autotest_common.sh@954 -- # '[' -z 58959 ']' 00:15:23.755 12:14:19 json_config -- common/autotest_common.sh@958 -- # kill -0 58959 00:15:23.755 12:14:19 json_config -- common/autotest_common.sh@959 -- # uname 00:15:23.755 12:14:19 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:23.755 12:14:19 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58959 00:15:23.755 12:14:19 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:23.755 killing process with pid 58959 00:15:23.755 12:14:19 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:23.755 12:14:19 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58959' 00:15:23.755 12:14:19 json_config -- common/autotest_common.sh@973 -- # kill 58959 00:15:23.755 12:14:19 json_config -- common/autotest_common.sh@978 -- # wait 58959 00:15:24.015 12:14:19 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:15:24.015 12:14:19 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:15:24.015 12:14:19 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:24.015 12:14:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:24.015 INFO: Success 00:15:24.015 12:14:19 json_config -- json_config/json_config.sh@335 -- # return 0 00:15:24.015 12:14:19 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:15:24.015 00:15:24.015 real 0m7.156s 00:15:24.015 user 0m9.971s 00:15:24.015 sys 0m1.233s 00:15:24.015 
12:14:19 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:24.015 ************************************ 00:15:24.015 END TEST json_config 00:15:24.015 ************************************ 00:15:24.015 12:14:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:15:24.015 12:14:19 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:15:24.015 12:14:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:24.015 12:14:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:24.015 12:14:19 -- common/autotest_common.sh@10 -- # set +x 00:15:24.015 ************************************ 00:15:24.015 START TEST json_config_extra_key 00:15:24.015 ************************************ 00:15:24.015 12:14:19 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:15:24.015 12:14:19 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:24.015 12:14:19 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:15:24.015 12:14:19 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:24.015 12:14:19 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:24.015 12:14:19 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:24.015 12:14:19 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:24.015 12:14:19 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:24.015 12:14:19 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:15:24.015 12:14:19 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:15:24.015 12:14:19 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:15:24.015 12:14:19 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:15:24.015 12:14:19 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:15:24.015 12:14:19 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:15:24.015 12:14:19 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:15:24.015 12:14:19 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:24.015 12:14:19 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:15:24.015 12:14:19 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:15:24.015 12:14:19 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:24.015 12:14:19 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:24.015 12:14:19 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:15:24.015 12:14:19 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:15:24.015 12:14:19 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:24.015 12:14:19 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:15:24.015 12:14:19 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:15:24.015 12:14:19 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:15:24.015 12:14:19 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:15:24.015 12:14:19 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:24.015 12:14:19 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:15:24.015 12:14:19 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:15:24.015 12:14:19 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:24.015 12:14:19 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:24.015 12:14:19 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:15:24.015 12:14:19 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:24.015 12:14:19 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:24.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.015 --rc genhtml_branch_coverage=1 00:15:24.015 --rc genhtml_function_coverage=1 00:15:24.015 --rc genhtml_legend=1 00:15:24.015 --rc geninfo_all_blocks=1 00:15:24.015 --rc geninfo_unexecuted_blocks=1 00:15:24.015 00:15:24.015 ' 00:15:24.015 12:14:19 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:24.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.015 --rc genhtml_branch_coverage=1 00:15:24.015 --rc genhtml_function_coverage=1 00:15:24.015 --rc genhtml_legend=1 00:15:24.015 --rc geninfo_all_blocks=1 00:15:24.015 --rc geninfo_unexecuted_blocks=1 00:15:24.015 00:15:24.015 ' 00:15:24.015 12:14:19 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:24.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.015 --rc genhtml_branch_coverage=1 00:15:24.015 --rc genhtml_function_coverage=1 00:15:24.015 --rc genhtml_legend=1 00:15:24.015 --rc geninfo_all_blocks=1 00:15:24.015 --rc geninfo_unexecuted_blocks=1 00:15:24.015 00:15:24.015 ' 00:15:24.015 12:14:19 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:24.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.015 --rc genhtml_branch_coverage=1 00:15:24.015 --rc genhtml_function_coverage=1 00:15:24.015 --rc genhtml_legend=1 00:15:24.015 --rc geninfo_all_blocks=1 00:15:24.015 --rc geninfo_unexecuted_blocks=1 00:15:24.015 00:15:24.015 ' 00:15:24.015 12:14:19 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:24.015 12:14:19 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:15:24.015 12:14:19 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:24.015 12:14:19 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:24.015 12:14:19 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:24.015 12:14:19 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:24.015 12:14:19 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:24.015 12:14:19 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:24.015 12:14:19 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:24.015 12:14:19 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:24.015 12:14:19 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:24.015 12:14:19 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:24.015 12:14:19 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:15:24.015 12:14:19 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:15:24.015 12:14:19 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:24.015 12:14:19 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:24.015 12:14:19 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:15:24.015 12:14:19 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:24.016 12:14:19 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:24.016 12:14:19 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:15:24.016 12:14:19 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:24.016 12:14:19 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:24.016 12:14:19 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:24.016 12:14:19 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.016 12:14:19 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.016 12:14:19 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.016 12:14:19 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:15:24.016 12:14:19 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.016 12:14:19 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:15:24.016 12:14:19 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:24.016 12:14:19 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:24.016 12:14:19 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:24.016 12:14:19 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:24.016 12:14:19 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:24.016 12:14:19 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:24.016 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:24.016 12:14:19 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:24.016 12:14:19 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:24.016 12:14:19 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:24.275 12:14:19 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:15:24.275 12:14:19 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:15:24.275 12:14:19 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:15:24.275 12:14:19 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:15:24.275 12:14:19 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:15:24.275 12:14:19 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:15:24.275 12:14:19 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:15:24.275 12:14:19 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:15:24.275 12:14:19 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:15:24.275 12:14:19 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:15:24.275 INFO: launching applications... 00:15:24.275 12:14:19 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
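[Annotation, not part of the captured log] The declare -A lines traced just above are json_config/common.sh's bookkeeping: each array (app_pid, app_socket, app_params, configs_path) is keyed by an app name, 'target' here, so the same helpers can start, query, and kill any registered app. A minimal sketch of that pattern using the values seen in this trace (paths and flags are illustrative, copied from this run, not guaranteed defaults):

# Sketch of the per-app associative-array pattern from test/json_config/common.sh.
declare -A app_pid=(['target']='')
declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
declare -A app_params=(['target']='-m 0x1 -s 1024')
declare -A configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')

start_app() {
    local app=$1
    # app_params is left unquoted on purpose so '-m 0x1 -s 1024' word-splits
    # into separate flags for spdk_tgt.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ${app_params[$app]} \
        -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
    app_pid[$app]=$!   # remember the PID under the same key
}

start_app target
echo "target is pid ${app_pid[target]} on socket ${app_socket[target]}"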
00:15:24.275 12:14:19 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:15:24.275 12:14:19 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:15:24.275 12:14:19 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:15:24.275 12:14:19 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:15:24.275 12:14:19 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:15:24.275 12:14:19 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:15:24.275 12:14:19 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:15:24.275 12:14:19 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:15:24.275 12:14:19 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59132 00:15:24.275 Waiting for target to run... 00:15:24.275 12:14:19 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:15:24.275 12:14:19 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59132 /var/tmp/spdk_tgt.sock 00:15:24.275 12:14:19 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 59132 ']' 00:15:24.275 12:14:19 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:15:24.275 12:14:19 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:24.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:15:24.275 12:14:19 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:15:24.275 12:14:19 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:24.275 12:14:19 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:15:24.275 12:14:19 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:15:24.275 [2024-11-25 12:14:19.933921] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:15:24.275 [2024-11-25 12:14:19.933977] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59132 ] 00:15:24.534 [2024-11-25 12:14:20.237871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.534 [2024-11-25 12:14:20.267576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.100 12:14:20 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:25.100 00:15:25.100 12:14:20 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:15:25.100 12:14:20 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:15:25.100 INFO: shutting down applications... 00:15:25.100 12:14:20 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
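[Annotation, not part of the captured log] The shutdown traced below is a bounded poll: send SIGINT to the recorded PID, then probe with kill -0 up to 30 times at 0.5 s intervals (a 15 s budget) and break as soon as the process is gone, matching the common.sh@40-45 loop in the trace. A standalone sketch of that loop:

# Sketch of json_config/common.sh's bounded shutdown loop.
shutdown_app() {
    local pid=$1
    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        # kill -0 sends no signal; it only tests whether the PID still exists.
        if ! kill -0 "$pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            return 0
        fi
        sleep 0.5
    done
    echo "ERROR: pid $pid did not exit within 15 s" >&2
    return 1
}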
00:15:25.100 12:14:20 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:15:25.100 12:14:20 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:15:25.100 12:14:20 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:15:25.100 12:14:20 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59132 ]] 00:15:25.100 12:14:20 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59132 00:15:25.100 12:14:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:15:25.100 12:14:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:15:25.100 12:14:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59132 00:15:25.100 12:14:20 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:15:25.667 12:14:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:15:25.667 12:14:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:15:25.667 12:14:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59132 00:15:25.667 12:14:21 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:15:25.667 12:14:21 json_config_extra_key -- json_config/common.sh@43 -- # break 00:15:25.667 12:14:21 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:15:25.667 SPDK target shutdown done 00:15:25.667 12:14:21 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:15:25.667 Success 00:15:25.667 12:14:21 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:15:25.667 00:15:25.667 real 0m1.566s 00:15:25.667 user 0m1.265s 00:15:25.667 sys 0m0.313s 00:15:25.667 12:14:21 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:25.667 12:14:21 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:15:25.667 ************************************ 00:15:25.667 END TEST json_config_extra_key 00:15:25.667 ************************************ 00:15:25.667 12:14:21 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:15:25.667 12:14:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:25.667 12:14:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:25.667 12:14:21 -- common/autotest_common.sh@10 -- # set +x 00:15:25.667 ************************************ 00:15:25.667 START TEST alias_rpc 00:15:25.667 ************************************ 00:15:25.667 12:14:21 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:15:25.667 * Looking for test storage... 
00:15:25.667 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:15:25.667 12:14:21 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:25.667 12:14:21 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:15:25.667 12:14:21 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:25.667 12:14:21 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:25.667 12:14:21 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:25.667 12:14:21 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:25.667 12:14:21 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:25.667 12:14:21 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:15:25.667 12:14:21 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:15:25.667 12:14:21 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:15:25.667 12:14:21 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:15:25.667 12:14:21 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:15:25.667 12:14:21 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:15:25.667 12:14:21 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:15:25.667 12:14:21 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:25.667 12:14:21 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:15:25.667 12:14:21 alias_rpc -- scripts/common.sh@345 -- # : 1 00:15:25.667 12:14:21 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:25.667 12:14:21 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:25.667 12:14:21 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:15:25.667 12:14:21 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:15:25.667 12:14:21 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:25.667 12:14:21 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:15:25.667 12:14:21 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:15:25.667 12:14:21 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:15:25.667 12:14:21 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:15:25.667 12:14:21 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:25.667 12:14:21 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:15:25.667 12:14:21 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:15:25.667 12:14:21 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:25.667 12:14:21 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:25.668 12:14:21 alias_rpc -- scripts/common.sh@368 -- # return 0 00:15:25.668 12:14:21 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:25.668 12:14:21 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:25.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.668 --rc genhtml_branch_coverage=1 00:15:25.668 --rc genhtml_function_coverage=1 00:15:25.668 --rc genhtml_legend=1 00:15:25.668 --rc geninfo_all_blocks=1 00:15:25.668 --rc geninfo_unexecuted_blocks=1 00:15:25.668 00:15:25.668 ' 00:15:25.668 12:14:21 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:25.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.668 --rc genhtml_branch_coverage=1 00:15:25.668 --rc genhtml_function_coverage=1 00:15:25.668 --rc genhtml_legend=1 00:15:25.668 --rc geninfo_all_blocks=1 00:15:25.668 --rc geninfo_unexecuted_blocks=1 00:15:25.668 00:15:25.668 ' 00:15:25.668 12:14:21 alias_rpc -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:25.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.668 --rc genhtml_branch_coverage=1 00:15:25.668 --rc genhtml_function_coverage=1 00:15:25.668 --rc genhtml_legend=1 00:15:25.668 --rc geninfo_all_blocks=1 00:15:25.668 --rc geninfo_unexecuted_blocks=1 00:15:25.668 00:15:25.668 ' 00:15:25.668 12:14:21 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:25.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.668 --rc genhtml_branch_coverage=1 00:15:25.668 --rc genhtml_function_coverage=1 00:15:25.668 --rc genhtml_legend=1 00:15:25.668 --rc geninfo_all_blocks=1 00:15:25.668 --rc geninfo_unexecuted_blocks=1 00:15:25.668 00:15:25.668 ' 00:15:25.668 12:14:21 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:15:25.668 12:14:21 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59222 00:15:25.668 12:14:21 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59222 00:15:25.668 12:14:21 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 59222 ']' 00:15:25.668 12:14:21 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.668 12:14:21 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:25.668 12:14:21 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:25.668 12:14:21 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:25.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:25.668 12:14:21 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:25.668 12:14:21 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:25.927 [2024-11-25 12:14:21.555868] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:15:25.927 [2024-11-25 12:14:21.555933] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59222 ] 00:15:25.927 [2024-11-25 12:14:21.697530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.927 [2024-11-25 12:14:21.734827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.867 12:14:22 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:26.867 12:14:22 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:26.867 12:14:22 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:15:26.867 12:14:22 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59222 00:15:26.867 12:14:22 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 59222 ']' 00:15:26.867 12:14:22 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 59222 00:15:26.867 12:14:22 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:15:26.867 12:14:22 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:26.867 12:14:22 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59222 00:15:26.867 killing process with pid 59222 00:15:26.867 12:14:22 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:26.867 12:14:22 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:26.867 12:14:22 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59222' 00:15:26.867 12:14:22 alias_rpc -- common/autotest_common.sh@973 -- # kill 59222 00:15:26.867 12:14:22 alias_rpc -- common/autotest_common.sh@978 -- # wait 59222 00:15:27.126 00:15:27.126 real 0m1.519s 00:15:27.126 user 0m1.747s 00:15:27.126 sys 0m0.300s 00:15:27.126 12:14:22 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:27.126 12:14:22 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:27.126 ************************************ 00:15:27.126 END TEST alias_rpc 00:15:27.126 ************************************ 00:15:27.126 12:14:22 -- spdk/autotest.sh@163 -- # [[ 1 -eq 0 ]] 00:15:27.126 12:14:22 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:15:27.126 12:14:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:27.126 12:14:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:27.126 12:14:22 -- common/autotest_common.sh@10 -- # set +x 00:15:27.126 ************************************ 00:15:27.126 START TEST dpdk_mem_utility 00:15:27.126 ************************************ 00:15:27.126 12:14:22 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:15:27.126 * Looking for test storage... 
00:15:27.126 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:15:27.126 12:14:22 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:27.126 12:14:22 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:15:27.126 12:14:22 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:27.383 12:14:23 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:27.383 12:14:23 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:27.383 12:14:23 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:27.383 12:14:23 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:27.383 12:14:23 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:15:27.383 12:14:23 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:15:27.383 12:14:23 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:15:27.383 12:14:23 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:15:27.384 12:14:23 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:15:27.384 12:14:23 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:15:27.384 12:14:23 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:15:27.384 12:14:23 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:27.384 12:14:23 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:15:27.384 12:14:23 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:15:27.384 12:14:23 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:27.384 12:14:23 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:27.384 12:14:23 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:15:27.384 12:14:23 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:15:27.384 12:14:23 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:27.384 12:14:23 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:15:27.384 12:14:23 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:15:27.384 12:14:23 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:15:27.384 12:14:23 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:15:27.384 12:14:23 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:27.384 12:14:23 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:15:27.384 12:14:23 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:15:27.384 12:14:23 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:27.384 12:14:23 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:27.384 12:14:23 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:15:27.384 12:14:23 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:27.384 12:14:23 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:27.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.384 --rc genhtml_branch_coverage=1 00:15:27.384 --rc genhtml_function_coverage=1 00:15:27.384 --rc genhtml_legend=1 00:15:27.384 --rc geninfo_all_blocks=1 00:15:27.384 --rc geninfo_unexecuted_blocks=1 00:15:27.384 00:15:27.384 ' 00:15:27.384 12:14:23 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:27.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.384 --rc 
genhtml_branch_coverage=1 00:15:27.384 --rc genhtml_function_coverage=1 00:15:27.384 --rc genhtml_legend=1 00:15:27.384 --rc geninfo_all_blocks=1 00:15:27.384 --rc geninfo_unexecuted_blocks=1 00:15:27.384 00:15:27.384 ' 00:15:27.384 12:14:23 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:27.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.384 --rc genhtml_branch_coverage=1 00:15:27.384 --rc genhtml_function_coverage=1 00:15:27.384 --rc genhtml_legend=1 00:15:27.384 --rc geninfo_all_blocks=1 00:15:27.384 --rc geninfo_unexecuted_blocks=1 00:15:27.384 00:15:27.384 ' 00:15:27.384 12:14:23 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:27.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.384 --rc genhtml_branch_coverage=1 00:15:27.384 --rc genhtml_function_coverage=1 00:15:27.384 --rc genhtml_legend=1 00:15:27.384 --rc geninfo_all_blocks=1 00:15:27.384 --rc geninfo_unexecuted_blocks=1 00:15:27.384 00:15:27.384 ' 00:15:27.384 12:14:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:15:27.384 12:14:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59316 00:15:27.384 12:14:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:27.384 12:14:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59316 00:15:27.384 12:14:23 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 59316 ']' 00:15:27.384 12:14:23 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:27.384 12:14:23 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:27.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:27.384 12:14:23 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:27.384 12:14:23 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:27.384 12:14:23 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:15:27.384 [2024-11-25 12:14:23.106204] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:15:27.384 [2024-11-25 12:14:23.106265] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59316 ] 00:15:27.384 [2024-11-25 12:14:23.245456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:27.641 [2024-11-25 12:14:23.281330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.206 12:14:23 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:28.206 12:14:23 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:15:28.206 12:14:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:15:28.206 12:14:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:15:28.206 12:14:23 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.206 12:14:23 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:15:28.206 { 00:15:28.206 "filename": "/tmp/spdk_mem_dump.txt" 00:15:28.206 } 00:15:28.206 12:14:24 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.206 12:14:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:15:28.206 DPDK memory size 810.000000 MiB in 1 heap(s) 00:15:28.206 1 heaps totaling size 810.000000 MiB 00:15:28.206 size: 810.000000 MiB heap id: 0 00:15:28.206 end heaps---------- 00:15:28.206 9 mempools totaling size 595.772034 MiB 00:15:28.206 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:15:28.206 size: 158.602051 MiB name: PDU_data_out_Pool 00:15:28.206 size: 92.545471 MiB name: bdev_io_59316 00:15:28.206 size: 50.003479 MiB name: msgpool_59316 00:15:28.206 size: 36.509338 MiB name: fsdev_io_59316 00:15:28.206 size: 21.763794 MiB name: PDU_Pool 00:15:28.206 size: 19.513306 MiB name: SCSI_TASK_Pool 00:15:28.206 size: 4.133484 MiB name: evtpool_59316 00:15:28.206 size: 0.026123 MiB name: Session_Pool 00:15:28.206 end mempools------- 00:15:28.206 6 memzones totaling size 4.142822 MiB 00:15:28.206 size: 1.000366 MiB name: RG_ring_0_59316 00:15:28.206 size: 1.000366 MiB name: RG_ring_1_59316 00:15:28.206 size: 1.000366 MiB name: RG_ring_4_59316 00:15:28.206 size: 1.000366 MiB name: RG_ring_5_59316 00:15:28.206 size: 0.125366 MiB name: RG_ring_2_59316 00:15:28.206 size: 0.015991 MiB name: RG_ring_3_59316 00:15:28.206 end memzones------- 00:15:28.206 12:14:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:15:28.466 heap id: 0 total size: 810.000000 MiB number of busy elements: 237 number of free elements: 15 00:15:28.466 list of free elements. 
size: 10.827148 MiB 00:15:28.466 element at address: 0x200018a00000 with size: 0.999878 MiB 00:15:28.466 element at address: 0x200018c00000 with size: 0.999878 MiB 00:15:28.466 element at address: 0x200000400000 with size: 0.996338 MiB 00:15:28.466 element at address: 0x200031800000 with size: 0.994446 MiB 00:15:28.466 element at address: 0x200006400000 with size: 0.959839 MiB 00:15:28.466 element at address: 0x200012c00000 with size: 0.954285 MiB 00:15:28.466 element at address: 0x200018e00000 with size: 0.936584 MiB 00:15:28.466 element at address: 0x200000200000 with size: 0.717346 MiB 00:15:28.466 element at address: 0x20001a600000 with size: 0.570801 MiB 00:15:28.466 element at address: 0x200000c00000 with size: 0.491028 MiB 00:15:28.466 element at address: 0x20000a600000 with size: 0.489441 MiB 00:15:28.466 element at address: 0x200019000000 with size: 0.485657 MiB 00:15:28.466 element at address: 0x200003e00000 with size: 0.481018 MiB 00:15:28.466 element at address: 0x200027a00000 with size: 0.397217 MiB 00:15:28.466 element at address: 0x200000800000 with size: 0.353394 MiB 00:15:28.466 list of standard malloc elements. size: 199.253967 MiB 00:15:28.466 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:15:28.466 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:15:28.466 element at address: 0x200018afff80 with size: 1.000122 MiB 00:15:28.466 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:15:28.466 element at address: 0x200018efff80 with size: 1.000122 MiB 00:15:28.466 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:15:28.466 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:15:28.466 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:15:28.466 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:15:28.466 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:15:28.466 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:15:28.466 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:15:28.466 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:15:28.466 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:15:28.466 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:15:28.466 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:15:28.466 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:15:28.466 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:15:28.466 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:15:28.466 element at address: 0x2000004ff700 with size: 0.000183 MiB 00:15:28.466 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:15:28.466 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:15:28.466 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:15:28.466 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:15:28.466 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:15:28.466 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:15:28.466 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:15:28.466 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:15:28.466 element at address: 0x20000085a780 with size: 0.000183 MiB 00:15:28.466 element at address: 0x20000085a980 with size: 0.000183 MiB 00:15:28.466 element at address: 0x20000085ec40 with size: 0.000183 MiB 00:15:28.466 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:15:28.466 element at address: 0x20000087efc0 with size: 0.000183 MiB 
00:15:28.466 element at address: 0x20000087f080 with size: 0.000183 MiB 00:15:28.466 element at address: 0x20000087f140 with size: 0.000183 MiB 00:15:28.466 element at address: 0x20000087f200 with size: 0.000183 MiB 00:15:28.466 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:15:28.466 element at address: 0x20000087f380 with size: 0.000183 MiB 00:15:28.466 element at address: 0x20000087f440 with size: 0.000183 MiB 00:15:28.466 element at address: 0x20000087f500 with size: 0.000183 MiB 00:15:28.466 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:15:28.466 element at address: 0x20000087f680 with size: 0.000183 MiB 00:15:28.466 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:15:28.466 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:15:28.466 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:15:28.466 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:15:28.466 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:15:28.466 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:15:28.466 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:15:28.466 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:15:28.466 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:15:28.466 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:15:28.466 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:15:28.466 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:15:28.466 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:15:28.466 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:15:28.466 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:15:28.466 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:15:28.466 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:15:28.466 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:15:28.466 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:15:28.466 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:15:28.466 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:15:28.466 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:15:28.466 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:15:28.466 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:15:28.466 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:15:28.466 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:15:28.466 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:15:28.466 element at address: 0x200000cff000 with size: 0.000183 MiB 00:15:28.466 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:15:28.466 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:15:28.466 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:15:28.466 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:15:28.466 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:15:28.466 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:15:28.466 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:15:28.466 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:15:28.466 element at address: 0x200003efb980 with size: 0.000183 MiB 00:15:28.466 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:15:28.466 element at address: 0x20000a67d4c0 with size: 0.000183 MiB 00:15:28.466 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:15:28.466 element at 
address: 0x20000a67d640 with size: 0.000183 MiB 00:15:28.466 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:15:28.466 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:15:28.466 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:15:28.466 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:15:28.466 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:15:28.466 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:15:28.466 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:15:28.466 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:15:28.466 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:15:28.466 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:15:28.466 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:15:28.466 element at address: 0x20001a692200 with size: 0.000183 MiB 00:15:28.466 element at address: 0x20001a6922c0 with size: 0.000183 MiB 00:15:28.466 element at address: 0x20001a692380 with size: 0.000183 MiB 00:15:28.466 element at address: 0x20001a692440 with size: 0.000183 MiB 00:15:28.466 element at address: 0x20001a692500 with size: 0.000183 MiB 00:15:28.466 element at address: 0x20001a6925c0 with size: 0.000183 MiB 00:15:28.466 element at address: 0x20001a692680 with size: 0.000183 MiB 00:15:28.466 element at address: 0x20001a692740 with size: 0.000183 MiB 00:15:28.466 element at address: 0x20001a692800 with size: 0.000183 MiB 00:15:28.466 element at address: 0x20001a6928c0 with size: 0.000183 MiB 00:15:28.466 element at address: 0x20001a692980 with size: 0.000183 MiB 00:15:28.466 element at address: 0x20001a692a40 with size: 0.000183 MiB 00:15:28.466 element at address: 0x20001a692b00 with size: 0.000183 MiB 00:15:28.466 element at address: 0x20001a692bc0 with size: 0.000183 MiB 00:15:28.466 element at address: 0x20001a692c80 with size: 0.000183 MiB 00:15:28.466 element at address: 0x20001a692d40 with size: 0.000183 MiB 00:15:28.466 element at address: 0x20001a692e00 with size: 0.000183 MiB 00:15:28.466 element at address: 0x20001a692ec0 with size: 0.000183 MiB 00:15:28.466 element at address: 0x20001a692f80 with size: 0.000183 MiB 00:15:28.466 element at address: 0x20001a693040 with size: 0.000183 MiB 00:15:28.467 element at address: 0x20001a693100 with size: 0.000183 MiB 00:15:28.467 element at address: 0x20001a6931c0 with size: 0.000183 MiB 00:15:28.467 element at address: 0x20001a693280 with size: 0.000183 MiB 00:15:28.467 element at address: 0x20001a693340 with size: 0.000183 MiB 00:15:28.467 element at address: 0x20001a693400 with size: 0.000183 MiB 00:15:28.467 element at address: 0x20001a6934c0 with size: 0.000183 MiB 00:15:28.467 element at address: 0x20001a693580 with size: 0.000183 MiB 00:15:28.467 element at address: 0x20001a693640 with size: 0.000183 MiB 00:15:28.467 element at address: 0x20001a693700 with size: 0.000183 MiB 00:15:28.467 element at address: 0x20001a6937c0 with size: 0.000183 MiB 00:15:28.467 element at address: 0x20001a693880 with size: 0.000183 MiB 00:15:28.467 element at address: 0x20001a693940 with size: 0.000183 MiB 00:15:28.467 element at address: 0x20001a693a00 with size: 0.000183 MiB 00:15:28.467 element at address: 0x20001a693ac0 with size: 0.000183 MiB 00:15:28.467 element at address: 0x20001a693b80 with size: 0.000183 MiB 00:15:28.467 element at address: 0x20001a693c40 with size: 0.000183 MiB 00:15:28.467 element at address: 0x20001a693d00 with size: 0.000183 MiB 00:15:28.467 element at address: 0x20001a693dc0 
with size: 0.000183 MiB
00:15:28.467 [elided: roughly one hundred near-identical heap-element lines of the form 'element at address: 0x20001a69xxxx / 0x200027a6xxxx with size: 0.000183 MiB' (192-byte elements)]
00:15:28.467 list of memzone associated elements. size: 599.918884 MiB
00:15:28.467 element at address: 0x20001a695500 with size: 211.416748 MiB
00:15:28.467 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:15:28.467 element at address: 0x200027a6ffc0 with size: 157.562561 MiB
00:15:28.467 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:15:28.467 element at address: 0x200012df4780 with size: 92.045044 MiB
00:15:28.467 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_59316_0
00:15:28.467 element at address: 0x200000dff380 with size: 48.003052 MiB
00:15:28.467 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59316_0
00:15:28.467 element at address: 0x200003ffdb80 with size: 36.008911 MiB
00:15:28.467 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59316_0
00:15:28.467 element at address: 0x2000191be940 with size: 20.255554 MiB
00:15:28.467 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:15:28.467 element at address: 0x2000319feb40 with size: 18.005066 MiB
00:15:28.467 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:15:28.467 element at address: 0x2000004fff00 with size: 3.000244 MiB
00:15:28.467 associated memzone info: size: 3.000122 MiB name: MP_evtpool_59316_0
00:15:28.467 element at address: 0x2000009ffe00 with size: 2.000488 MiB
00:15:28.467 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59316
00:15:28.467 element at address: 0x2000002d7d00 with size: 1.008118 MiB
00:15:28.467 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59316
00:15:28.467 element at address: 0x20000a6fde40 with size: 1.008118 MiB
00:15:28.467 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:15:28.467 element at address: 0x2000190bc800 with size: 1.008118 MiB
00:15:28.467 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:15:28.467 element at address: 0x2000064fde40 with size: 1.008118 MiB
00:15:28.467 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:15:28.467 element at address: 0x200003efba40 with size: 1.008118 MiB
00:15:28.467 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:15:28.468 element at address: 0x200000cff180 with size: 1.000488 MiB
00:15:28.468 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59316
00:15:28.468 element at address: 0x2000008ffc00 with size: 1.000488 MiB
00:15:28.468 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59316
00:15:28.468 element at address: 0x200012cf4580 with size: 1.000488 MiB
00:15:28.468 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59316
00:15:28.468 element at address: 0x2000318fe940 with size: 1.000488 MiB
00:15:28.468 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59316
00:15:28.468 element at address: 0x20000087f740 with size: 0.500488 MiB
00:15:28.468 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_59316
00:15:28.468 element at address: 0x200000c7ee00 with size: 0.500488 MiB
00:15:28.468 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59316
00:15:28.468 element at address: 0x20000a67db80 with size: 0.500488 MiB
00:15:28.468 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:15:28.468 element at address: 0x200003e7b780 with size: 0.500488 MiB
00:15:28.468 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:15:28.468 element at address: 0x20001907c540 with size: 0.250488 MiB
00:15:28.468 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:15:28.468 element at address: 0x2000002b7a40 with size: 0.125488 MiB
00:15:28.468 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_59316
00:15:28.468 element at address: 0x20000085ed00 with size: 0.125488 MiB
00:15:28.468 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59316
00:15:28.468 element at address: 0x2000064f5b80 with size: 0.031738 MiB
00:15:28.468 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:15:28.468 element at address: 0x200027a65c80 with size: 0.023743 MiB
00:15:28.468 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:15:28.468 element at address: 0x20000085aa40 with size: 0.016113 MiB
00:15:28.468 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59316
00:15:28.468 element at address: 0x200027a6bdc0 with size: 0.002441 MiB
00:15:28.468 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:15:28.468 element at address: 0x2000004ffc40 with size: 0.000305 MiB
00:15:28.468 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59316
00:15:28.468 element at address: 0x2000008ffa00 with size: 0.000305 MiB
00:15:28.468 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_59316
00:15:28.468 element at address: 0x20000085a840 with size: 0.000305 MiB
00:15:28.468 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59316
00:15:28.468 element at address: 0x200027a6c880 with size: 0.000305 MiB
00:15:28.468 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:15:28.468 12:14:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:15:28.468 12:14:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59316
00:15:28.468 12:14:24 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 59316 ']'
00:15:28.468 12:14:24 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 59316
00:15:28.468 12:14:24 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:15:28.468 12:14:24 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:28.468 12:14:24 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59316
00:15:28.468 12:14:24 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:15:28.468 12:14:24 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:15:28.468 12:14:24 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59316'
killing process with pid 59316
12:14:24 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 59316
00:15:28.468 12:14:24 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 59316
00:15:28.468
00:15:28.468 real 0m1.407s
00:15:28.468 user 0m1.560s
00:15:28.468 sys 0m0.290s
00:15:28.468 12:14:24 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:28.468
************************************ 00:15:28.468 END TEST dpdk_mem_utility 00:15:28.468 ************************************ 00:15:28.468 12:14:24 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:15:28.727 12:14:24 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:15:28.727 12:14:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:28.727 12:14:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:28.727 12:14:24 -- common/autotest_common.sh@10 -- # set +x 00:15:28.727 ************************************ 00:15:28.727 START TEST event 00:15:28.727 ************************************ 00:15:28.727 12:14:24 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:15:28.727 * Looking for test storage... 00:15:28.727 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:15:28.727 12:14:24 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:28.727 12:14:24 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:28.727 12:14:24 event -- common/autotest_common.sh@1693 -- # lcov --version 00:15:28.727 12:14:24 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:28.727 12:14:24 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:28.727 12:14:24 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:28.727 12:14:24 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:28.727 12:14:24 event -- scripts/common.sh@336 -- # IFS=.-: 00:15:28.727 12:14:24 event -- scripts/common.sh@336 -- # read -ra ver1 00:15:28.727 12:14:24 event -- scripts/common.sh@337 -- # IFS=.-: 00:15:28.727 12:14:24 event -- scripts/common.sh@337 -- # read -ra ver2 00:15:28.727 12:14:24 event -- scripts/common.sh@338 -- # local 'op=<' 00:15:28.727 12:14:24 event -- scripts/common.sh@340 -- # ver1_l=2 00:15:28.727 12:14:24 event -- scripts/common.sh@341 -- # ver2_l=1 00:15:28.727 12:14:24 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:28.727 12:14:24 event -- scripts/common.sh@344 -- # case "$op" in 00:15:28.727 12:14:24 event -- scripts/common.sh@345 -- # : 1 00:15:28.727 12:14:24 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:28.727 12:14:24 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:28.727 12:14:24 event -- scripts/common.sh@365 -- # decimal 1 00:15:28.727 12:14:24 event -- scripts/common.sh@353 -- # local d=1 00:15:28.727 12:14:24 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:28.727 12:14:24 event -- scripts/common.sh@355 -- # echo 1 00:15:28.727 12:14:24 event -- scripts/common.sh@365 -- # ver1[v]=1 00:15:28.727 12:14:24 event -- scripts/common.sh@366 -- # decimal 2 00:15:28.727 12:14:24 event -- scripts/common.sh@353 -- # local d=2 00:15:28.727 12:14:24 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:28.727 12:14:24 event -- scripts/common.sh@355 -- # echo 2 00:15:28.727 12:14:24 event -- scripts/common.sh@366 -- # ver2[v]=2 00:15:28.727 12:14:24 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:28.727 12:14:24 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:28.727 12:14:24 event -- scripts/common.sh@368 -- # return 0 00:15:28.727 12:14:24 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:28.727 12:14:24 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:28.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.727 --rc genhtml_branch_coverage=1 00:15:28.727 --rc genhtml_function_coverage=1 00:15:28.727 --rc genhtml_legend=1 00:15:28.727 --rc geninfo_all_blocks=1 00:15:28.727 --rc geninfo_unexecuted_blocks=1 00:15:28.727 00:15:28.727 ' 00:15:28.727 12:14:24 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:28.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.727 --rc genhtml_branch_coverage=1 00:15:28.727 --rc genhtml_function_coverage=1 00:15:28.727 --rc genhtml_legend=1 00:15:28.727 --rc geninfo_all_blocks=1 00:15:28.727 --rc geninfo_unexecuted_blocks=1 00:15:28.727 00:15:28.727 ' 00:15:28.727 12:14:24 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:28.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.727 --rc genhtml_branch_coverage=1 00:15:28.727 --rc genhtml_function_coverage=1 00:15:28.727 --rc genhtml_legend=1 00:15:28.727 --rc geninfo_all_blocks=1 00:15:28.727 --rc geninfo_unexecuted_blocks=1 00:15:28.727 00:15:28.727 ' 00:15:28.727 12:14:24 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:28.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.727 --rc genhtml_branch_coverage=1 00:15:28.727 --rc genhtml_function_coverage=1 00:15:28.727 --rc genhtml_legend=1 00:15:28.727 --rc geninfo_all_blocks=1 00:15:28.727 --rc geninfo_unexecuted_blocks=1 00:15:28.727 00:15:28.727 ' 00:15:28.727 12:14:24 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:28.727 12:14:24 event -- bdev/nbd_common.sh@6 -- # set -e 00:15:28.727 12:14:24 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:15:28.727 12:14:24 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:15:28.727 12:14:24 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:28.727 12:14:24 event -- common/autotest_common.sh@10 -- # set +x 00:15:28.727 ************************************ 00:15:28.727 START TEST event_perf 00:15:28.727 ************************************ 00:15:28.727 12:14:24 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:15:28.727 Running I/O for 1 seconds...[2024-11-25 
12:14:24.536322] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:15:28.727 [2024-11-25 12:14:24.536407] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59408 ] 00:15:28.986 [2024-11-25 12:14:24.675325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:28.986 [2024-11-25 12:14:24.714067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:28.986 [2024-11-25 12:14:24.714179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:28.986 [2024-11-25 12:14:24.714229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.986 [2024-11-25 12:14:24.714230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:29.921 Running I/O for 1 seconds... 00:15:29.921 lcore 0: 176347 00:15:29.921 lcore 1: 176349 00:15:29.921 lcore 2: 176348 00:15:29.921 lcore 3: 176350 00:15:29.921 done. 00:15:29.921 00:15:29.921 real 0m1.225s 00:15:29.921 user 0m4.066s 00:15:29.921 sys 0m0.036s 00:15:29.921 12:14:25 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:29.921 ************************************ 00:15:29.921 END TEST event_perf 00:15:29.921 ************************************ 00:15:29.921 12:14:25 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:15:29.921 12:14:25 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:15:29.921 12:14:25 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:29.921 12:14:25 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:29.921 12:14:25 event -- common/autotest_common.sh@10 -- # set +x 00:15:29.921 ************************************ 00:15:29.921 START TEST event_reactor 00:15:29.921 ************************************ 00:15:29.921 12:14:25 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:15:30.180 [2024-11-25 12:14:25.800343] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
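
The event_perf run above was started with '-m 0xF -t 1': the -m argument is a hex core mask with one reactor spawned per set bit, which is why four reactors come up on cores 0-3 and the summary prints one event counter per lcore. A minimal sketch for repeating it by hand, assuming hugepages are already configured (e.g. via scripts/setup.sh); the 0x3 mask and 5-second duration are arbitrary illustrations, not values from this job:

  cd /home/vagrant/spdk_repo/spdk
  ./test/event/event_perf/event_perf -m 0xF -t 1   # the exact invocation used above
  ./test/event/event_perf/event_perf -m 0x3 -t 5   # two reactors (cores 0-1), 5-second run
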
00:15:30.180 [2024-11-25 12:14:25.800420] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59447 ] 00:15:30.180 [2024-11-25 12:14:25.942102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.180 [2024-11-25 12:14:25.978487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.566 test_start 00:15:31.566 oneshot 00:15:31.566 tick 100 00:15:31.566 tick 100 00:15:31.566 tick 250 00:15:31.566 tick 100 00:15:31.566 tick 100 00:15:31.566 tick 100 00:15:31.566 tick 250 00:15:31.566 tick 500 00:15:31.566 tick 100 00:15:31.566 tick 100 00:15:31.566 tick 250 00:15:31.566 tick 100 00:15:31.566 tick 100 00:15:31.566 test_end 00:15:31.566 00:15:31.566 real 0m1.224s 00:15:31.566 user 0m1.089s 00:15:31.566 sys 0m0.029s 00:15:31.566 12:14:27 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:31.566 12:14:27 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:15:31.566 ************************************ 00:15:31.566 END TEST event_reactor 00:15:31.566 ************************************ 00:15:31.566 12:14:27 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:15:31.566 12:14:27 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:31.566 12:14:27 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:31.566 12:14:27 event -- common/autotest_common.sh@10 -- # set +x 00:15:31.566 ************************************ 00:15:31.566 START TEST event_reactor_perf 00:15:31.566 ************************************ 00:15:31.566 12:14:27 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:15:31.566 [2024-11-25 12:14:27.065462] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
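
For context: the event_reactor test that just finished schedules a one-shot event plus repeating timers on a single reactor and appears to print one tick line per timer expiration, while event_reactor_perf (starting here) measures raw event throughput. A minimal sketch for rerunning either binary by hand, again assuming hugepages are configured; the 5-second duration is an arbitrary example:

  cd /home/vagrant/spdk_repo/spdk
  ./test/event/reactor/reactor -t 1              # the invocation used above
  ./test/event/reactor_perf/reactor_perf -t 5    # longer sampling window than the 1 s run above
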
00:15:31.566 [2024-11-25 12:14:27.065630] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59477 ] 00:15:31.566 [2024-11-25 12:14:27.206019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.566 [2024-11-25 12:14:27.242554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.500 test_start 00:15:32.501 test_end 00:15:32.501 Performance: 386967 events per second 00:15:32.501 00:15:32.501 real 0m1.225s 00:15:32.501 user 0m1.090s 00:15:32.501 sys 0m0.028s 00:15:32.501 12:14:28 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:32.501 12:14:28 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:15:32.501 ************************************ 00:15:32.501 END TEST event_reactor_perf 00:15:32.501 ************************************ 00:15:32.501 12:14:28 event -- event/event.sh@49 -- # uname -s 00:15:32.501 12:14:28 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:15:32.501 12:14:28 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:15:32.501 12:14:28 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:32.501 12:14:28 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:32.501 12:14:28 event -- common/autotest_common.sh@10 -- # set +x 00:15:32.501 ************************************ 00:15:32.501 START TEST event_scheduler 00:15:32.501 ************************************ 00:15:32.501 12:14:28 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:15:32.759 * Looking for test storage... 
00:15:32.759 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:15:32.759 12:14:28 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:32.759 12:14:28 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:32.759 12:14:28 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:15:32.759 12:14:28 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:32.759 12:14:28 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:32.759 12:14:28 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:32.759 12:14:28 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:32.759 12:14:28 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:15:32.759 12:14:28 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:15:32.759 12:14:28 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:15:32.759 12:14:28 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:15:32.759 12:14:28 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:15:32.759 12:14:28 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:15:32.759 12:14:28 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:15:32.759 12:14:28 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:32.759 12:14:28 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:15:32.759 12:14:28 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:15:32.759 12:14:28 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:32.759 12:14:28 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:32.759 12:14:28 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:15:32.759 12:14:28 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:15:32.759 12:14:28 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:32.759 12:14:28 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:15:32.759 12:14:28 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:15:32.759 12:14:28 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:15:32.759 12:14:28 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:15:32.759 12:14:28 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:32.759 12:14:28 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:15:32.759 12:14:28 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:15:32.759 12:14:28 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:32.759 12:14:28 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:32.759 12:14:28 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:15:32.759 12:14:28 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:32.759 12:14:28 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:32.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.759 --rc genhtml_branch_coverage=1 00:15:32.759 --rc genhtml_function_coverage=1 00:15:32.759 --rc genhtml_legend=1 00:15:32.759 --rc geninfo_all_blocks=1 00:15:32.759 --rc geninfo_unexecuted_blocks=1 00:15:32.759 00:15:32.759 ' 00:15:32.759 12:14:28 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:32.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.759 --rc genhtml_branch_coverage=1 00:15:32.759 --rc genhtml_function_coverage=1 00:15:32.759 --rc genhtml_legend=1 00:15:32.759 --rc geninfo_all_blocks=1 00:15:32.759 --rc geninfo_unexecuted_blocks=1 00:15:32.759 00:15:32.759 ' 00:15:32.759 12:14:28 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:32.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.759 --rc genhtml_branch_coverage=1 00:15:32.759 --rc genhtml_function_coverage=1 00:15:32.760 --rc genhtml_legend=1 00:15:32.760 --rc geninfo_all_blocks=1 00:15:32.760 --rc geninfo_unexecuted_blocks=1 00:15:32.760 00:15:32.760 ' 00:15:32.760 12:14:28 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:32.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.760 --rc genhtml_branch_coverage=1 00:15:32.760 --rc genhtml_function_coverage=1 00:15:32.760 --rc genhtml_legend=1 00:15:32.760 --rc geninfo_all_blocks=1 00:15:32.760 --rc geninfo_unexecuted_blocks=1 00:15:32.760 00:15:32.760 ' 00:15:32.760 12:14:28 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:15:32.760 12:14:28 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59546 00:15:32.760 12:14:28 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:15:32.760 12:14:28 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59546 00:15:32.760 12:14:28 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 59546 ']' 00:15:32.760 12:14:28 event.event_scheduler -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:15:32.760 12:14:28 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:32.760 12:14:28 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:15:32.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:32.760 12:14:28 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.760 12:14:28 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:32.760 12:14:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:15:32.760 [2024-11-25 12:14:28.499672] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:15:32.760 [2024-11-25 12:14:28.499746] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59546 ] 00:15:33.019 [2024-11-25 12:14:28.640813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:33.019 [2024-11-25 12:14:28.680906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.019 [2024-11-25 12:14:28.681125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:33.019 [2024-11-25 12:14:28.681909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:33.019 [2024-11-25 12:14:28.681914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:33.587 12:14:29 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:33.587 12:14:29 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:15:33.587 12:14:29 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:15:33.587 12:14:29 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.587 12:14:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:15:33.587 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:15:33.587 POWER: Cannot set governor of lcore 0 to userspace 00:15:33.587 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:15:33.587 POWER: Cannot set governor of lcore 0 to performance 00:15:33.587 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:15:33.587 POWER: Cannot set governor of lcore 0 to userspace 00:15:33.587 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:15:33.587 POWER: Cannot set governor of lcore 0 to userspace 00:15:33.587 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:15:33.587 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:15:33.587 POWER: Unable to set Power Management Environment for lcore 0 00:15:33.587 [2024-11-25 12:14:29.395565] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:15:33.587 [2024-11-25 12:14:29.395577] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:15:33.587 [2024-11-25 12:14:29.395583] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:15:33.587 [2024-11-25 12:14:29.395591] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:15:33.587 [2024-11-25 12:14:29.395595] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:15:33.587 [2024-11-25 12:14:29.395600] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:15:33.587 12:14:29 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.587 12:14:29 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:15:33.587 12:14:29 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.587 12:14:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:15:33.587 [2024-11-25 12:14:29.459330] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:15:33.587 12:14:29 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.587 12:14:29 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:15:33.587 12:14:29 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:33.587 12:14:29 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:33.587 12:14:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:15:33.844 ************************************ 00:15:33.844 START TEST scheduler_create_thread 00:15:33.844 ************************************ 00:15:33.844 12:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:15:33.844 12:14:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:15:33.844 12:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.844 12:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:33.844 2 00:15:33.844 12:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.844 12:14:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:15:33.844 12:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.844 12:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:33.844 3 00:15:33.844 12:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.844 12:14:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:15:33.844 12:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.844 12:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:33.844 4 00:15:33.844 12:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.844 12:14:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:15:33.844 12:14:29 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.844 12:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:33.844 5 00:15:33.844 12:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.844 12:14:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:15:33.844 12:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.844 12:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:33.844 6 00:15:33.844 12:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.844 12:14:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:15:33.844 12:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.844 12:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:33.844 7 00:15:33.844 12:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.844 12:14:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:15:33.844 12:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.844 12:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:33.844 8 00:15:33.844 12:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.844 12:14:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:15:33.844 12:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.844 12:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:33.844 9 00:15:33.844 12:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.844 12:14:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:15:33.844 12:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.844 12:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:33.844 10 00:15:33.844 12:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.844 12:14:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:15:33.844 12:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.844 12:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:33.844 12:14:29 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.844 12:14:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:15:33.844 12:14:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:15:33.844 12:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.844 12:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:33.845 12:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.845 12:14:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:15:33.845 12:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.845 12:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:33.845 12:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.845 12:14:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:15:33.845 12:14:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:15:33.845 12:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.845 12:14:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:34.410 12:14:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.410 00:15:34.410 real 0m0.588s 00:15:34.410 user 0m0.014s 00:15:34.410 sys 0m0.004s 00:15:34.410 12:14:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:34.410 12:14:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:15:34.410 ************************************ 00:15:34.410 END TEST scheduler_create_thread 00:15:34.410 ************************************ 00:15:34.410 12:14:30 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:15:34.410 12:14:30 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59546 00:15:34.410 12:14:30 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 59546 ']' 00:15:34.410 12:14:30 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 59546 00:15:34.410 12:14:30 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:15:34.410 12:14:30 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:34.410 12:14:30 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59546 00:15:34.410 12:14:30 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:15:34.410 12:14:30 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:15:34.410 killing process with pid 59546 00:15:34.410 12:14:30 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59546' 00:15:34.411 12:14:30 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 59546 00:15:34.411 12:14:30 event.event_scheduler -- 
common/autotest_common.sh@978 -- # wait 59546 00:15:34.667 [2024-11-25 12:14:30.536781] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:15:34.925 00:15:34.925 real 0m2.330s 00:15:34.925 user 0m4.720s 00:15:34.925 sys 0m0.292s 00:15:34.925 12:14:30 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:34.925 ************************************ 00:15:34.925 END TEST event_scheduler 00:15:34.925 ************************************ 00:15:34.925 12:14:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:15:34.925 12:14:30 event -- event/event.sh@51 -- # modprobe -n nbd 00:15:34.925 12:14:30 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:15:34.925 12:14:30 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:34.925 12:14:30 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:34.925 12:14:30 event -- common/autotest_common.sh@10 -- # set +x 00:15:34.925 ************************************ 00:15:34.925 START TEST app_repeat 00:15:34.925 ************************************ 00:15:34.925 12:14:30 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:15:34.925 12:14:30 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:34.925 12:14:30 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:34.925 12:14:30 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:15:34.925 12:14:30 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:15:34.925 12:14:30 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:15:34.925 12:14:30 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:15:34.925 12:14:30 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:15:34.925 12:14:30 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59641 00:15:34.925 12:14:30 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:15:34.925 Process app_repeat pid: 59641 00:15:34.925 12:14:30 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59641' 00:15:34.925 12:14:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:15:34.925 spdk_app_start Round 0 00:15:34.925 12:14:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:15:34.925 12:14:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59641 /var/tmp/spdk-nbd.sock 00:15:34.925 12:14:30 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:15:34.925 12:14:30 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59641 ']' 00:15:34.925 12:14:30 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:34.925 12:14:30 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:34.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:34.925 12:14:30 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:34.925 12:14:30 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:34.925 12:14:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:15:34.925 [2024-11-25 12:14:30.711532] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
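
The event_scheduler test that just completed starts the app parked at --wait-for-rpc, switches it to the dynamic scheduler over RPC, and only then finishes initialization; the POWER/cpufreq errors above are expected in a VM with no scaling-governor support. A hand-driven sketch of the same sequence (rpc_cmd in the trace resolves to scripts/rpc.py; backgrounding and waiting are simplified here):

  cd /home/vagrant/spdk_repo/spdk
  ./test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &   # as launched above; -p 0x2 selects core 2 as the main core
  ./scripts/rpc.py framework_set_scheduler dynamic                     # must run before init completes
  ./scripts/rpc.py framework_start_init                                # resume subsystem initialization
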
00:15:34.925 [2024-11-25 12:14:30.711605] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59641 ] 00:15:35.275 [2024-11-25 12:14:30.851417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:35.275 [2024-11-25 12:14:30.888212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:35.275 [2024-11-25 12:14:30.888452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:35.841 12:14:31 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:35.841 12:14:31 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:15:35.841 12:14:31 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:15:36.098 Malloc0 00:15:36.098 12:14:31 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:15:36.356 Malloc1 00:15:36.356 12:14:32 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:15:36.356 12:14:32 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:36.356 12:14:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:15:36.356 12:14:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:36.356 12:14:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:36.356 12:14:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:36.356 12:14:32 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:15:36.356 12:14:32 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:36.356 12:14:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:15:36.356 12:14:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:36.356 12:14:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:36.356 12:14:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:36.356 12:14:32 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:15:36.356 12:14:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:36.356 12:14:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:36.356 12:14:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:15:36.613 /dev/nbd0 00:15:36.613 12:14:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:36.613 12:14:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:36.613 12:14:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:36.613 12:14:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:15:36.613 12:14:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:36.613 12:14:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:36.613 12:14:32 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:36.613 12:14:32 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:15:36.613 12:14:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:36.613 12:14:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:36.613 12:14:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:15:36.613 1+0 records in 00:15:36.613 1+0 records out 00:15:36.613 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000490338 s, 8.4 MB/s 00:15:36.613 12:14:32 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:36.613 12:14:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:15:36.613 12:14:32 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:36.613 12:14:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:36.613 12:14:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:15:36.613 12:14:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:36.613 12:14:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:36.613 12:14:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:15:36.871 /dev/nbd1 00:15:36.871 12:14:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:36.871 12:14:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:36.871 12:14:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:36.871 12:14:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:15:36.871 12:14:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:36.871 12:14:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:36.871 12:14:32 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:36.871 12:14:32 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:15:36.871 12:14:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:36.871 12:14:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:36.871 12:14:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:15:36.871 1+0 records in 00:15:36.871 1+0 records out 00:15:36.871 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000208147 s, 19.7 MB/s 00:15:36.871 12:14:32 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:36.871 12:14:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:15:36.871 12:14:32 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:36.871 12:14:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:36.871 12:14:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:15:36.871 12:14:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:36.871 12:14:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:36.871 12:14:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:36.871 12:14:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:36.871 
12:14:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:37.130 12:14:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:37.130 { 00:15:37.130 "bdev_name": "Malloc0", 00:15:37.130 "nbd_device": "/dev/nbd0" 00:15:37.130 }, 00:15:37.130 { 00:15:37.130 "bdev_name": "Malloc1", 00:15:37.130 "nbd_device": "/dev/nbd1" 00:15:37.130 } 00:15:37.130 ]' 00:15:37.130 12:14:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:37.130 12:14:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:37.130 { 00:15:37.130 "bdev_name": "Malloc0", 00:15:37.130 "nbd_device": "/dev/nbd0" 00:15:37.130 }, 00:15:37.130 { 00:15:37.130 "bdev_name": "Malloc1", 00:15:37.130 "nbd_device": "/dev/nbd1" 00:15:37.130 } 00:15:37.130 ]' 00:15:37.130 12:14:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:37.130 /dev/nbd1' 00:15:37.130 12:14:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:37.130 /dev/nbd1' 00:15:37.130 12:14:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:37.130 12:14:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:15:37.130 12:14:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:15:37.130 12:14:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:15:37.130 12:14:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:15:37.130 12:14:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:15:37.130 12:14:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:37.130 12:14:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:37.130 12:14:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:37.130 12:14:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:37.130 12:14:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:37.130 12:14:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:15:37.130 256+0 records in 00:15:37.130 256+0 records out 00:15:37.130 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00604425 s, 173 MB/s 00:15:37.130 12:14:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:37.130 12:14:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:37.130 256+0 records in 00:15:37.130 256+0 records out 00:15:37.130 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.015453 s, 67.9 MB/s 00:15:37.130 12:14:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:37.130 12:14:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:37.130 256+0 records in 00:15:37.130 256+0 records out 00:15:37.130 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0140142 s, 74.8 MB/s 00:15:37.130 12:14:32 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:15:37.130 12:14:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:37.130 12:14:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:37.130 12:14:32 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:37.130 12:14:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:37.130 12:14:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:37.130 12:14:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:37.130 12:14:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:37.130 12:14:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:15:37.130 12:14:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:37.130 12:14:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:15:37.130 12:14:32 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:37.130 12:14:32 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:15:37.130 12:14:32 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:37.130 12:14:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:37.130 12:14:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:37.130 12:14:32 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:15:37.130 12:14:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:37.130 12:14:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:37.388 12:14:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:37.388 12:14:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:37.388 12:14:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:37.388 12:14:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:37.388 12:14:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:37.388 12:14:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:37.388 12:14:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:15:37.388 12:14:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:15:37.388 12:14:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:37.388 12:14:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:37.646 12:14:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:37.646 12:14:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:37.646 12:14:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:37.646 12:14:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:37.646 12:14:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:37.646 12:14:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:37.646 12:14:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:15:37.646 12:14:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:15:37.646 12:14:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:37.646 12:14:33 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:37.646 12:14:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:37.904 12:14:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:37.904 12:14:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:37.904 12:14:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:37.904 12:14:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:37.904 12:14:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:37.904 12:14:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:15:37.904 12:14:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:15:37.904 12:14:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:15:37.904 12:14:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:15:37.904 12:14:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:15:37.904 12:14:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:37.904 12:14:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:15:37.904 12:14:33 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:15:38.163 12:14:33 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:15:38.163 [2024-11-25 12:14:33.934834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:38.163 [2024-11-25 12:14:33.969915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:38.163 [2024-11-25 12:14:33.970071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.163 [2024-11-25 12:14:34.000895] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:15:38.163 [2024-11-25 12:14:34.000938] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:15:41.458 12:14:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:15:41.458 12:14:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:15:41.458 spdk_app_start Round 1 00:15:41.458 12:14:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59641 /var/tmp/spdk-nbd.sock 00:15:41.458 12:14:36 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59641 ']' 00:15:41.458 12:14:36 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:41.458 12:14:36 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:41.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:41.458 12:14:36 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
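
Round 1, which begins here, repeats the same malloc-bdev/nbd round trip that Round 0 just finished. Reduced to its RPC skeleton the flow looks roughly like this (Malloc0 and /dev/nbd0 match the trace; /tmp/nbdrandtest is an arbitrary stand-in for the test's scratch file, and an app_repeat instance must already be serving /var/tmp/spdk-nbd.sock):

  cd /home/vagrant/spdk_repo/spdk
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096         # 64 MiB bdev with 4 KiB blocks -> Malloc0
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0   # expose the bdev as /dev/nbd0
  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256                      # 1 MiB of random test data
  dd if=/tmp/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct            # write it through the nbd device
  cmp -b -n 1M /tmp/nbdrandtest /dev/nbd0                                       # byte-for-byte readback check
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0            # detach when done
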
00:15:41.458 12:14:36 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:41.458 12:14:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:15:41.458 12:14:37 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:41.458 12:14:37 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:15:41.458 12:14:37 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:15:41.458 Malloc0 00:15:41.458 12:14:37 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:15:41.715 Malloc1 00:15:41.715 12:14:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:15:41.715 12:14:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:41.715 12:14:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:15:41.715 12:14:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:41.715 12:14:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:41.715 12:14:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:41.715 12:14:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:15:41.715 12:14:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:41.715 12:14:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:15:41.715 12:14:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:41.715 12:14:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:41.715 12:14:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:41.715 12:14:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:15:41.715 12:14:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:41.715 12:14:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:41.715 12:14:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:15:41.973 /dev/nbd0 00:15:41.973 12:14:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:41.973 12:14:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:41.973 12:14:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:41.973 12:14:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:15:41.973 12:14:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:41.973 12:14:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:41.973 12:14:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:41.973 12:14:37 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:15:41.973 12:14:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:41.973 12:14:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:41.973 12:14:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:15:41.973 1+0 records in 00:15:41.973 1+0 records out 
00:15:41.973 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273833 s, 15.0 MB/s 00:15:41.973 12:14:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:41.973 12:14:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:15:41.973 12:14:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:41.973 12:14:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:41.973 12:14:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:15:41.973 12:14:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:41.973 12:14:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:41.973 12:14:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:15:42.231 /dev/nbd1 00:15:42.231 12:14:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:42.231 12:14:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:42.231 12:14:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:42.231 12:14:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:15:42.231 12:14:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:42.231 12:14:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:42.231 12:14:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:42.231 12:14:37 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:15:42.231 12:14:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:42.231 12:14:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:42.231 12:14:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:15:42.231 1+0 records in 00:15:42.231 1+0 records out 00:15:42.231 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000199264 s, 20.6 MB/s 00:15:42.231 12:14:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:42.231 12:14:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:15:42.231 12:14:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:42.231 12:14:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:42.231 12:14:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:15:42.231 12:14:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:42.231 12:14:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:42.231 12:14:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:42.231 12:14:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:42.231 12:14:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:42.489 12:14:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:42.489 { 00:15:42.489 "bdev_name": "Malloc0", 00:15:42.489 "nbd_device": "/dev/nbd0" 00:15:42.489 }, 00:15:42.489 { 00:15:42.489 "bdev_name": "Malloc1", 00:15:42.489 "nbd_device": "/dev/nbd1" 00:15:42.489 } 
00:15:42.489 ]' 00:15:42.489 12:14:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:42.489 { 00:15:42.489 "bdev_name": "Malloc0", 00:15:42.489 "nbd_device": "/dev/nbd0" 00:15:42.489 }, 00:15:42.489 { 00:15:42.489 "bdev_name": "Malloc1", 00:15:42.489 "nbd_device": "/dev/nbd1" 00:15:42.489 } 00:15:42.489 ]' 00:15:42.489 12:14:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:42.489 12:14:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:42.489 /dev/nbd1' 00:15:42.489 12:14:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:42.489 /dev/nbd1' 00:15:42.489 12:14:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:42.489 12:14:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:15:42.489 12:14:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:15:42.489 12:14:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:15:42.489 12:14:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:15:42.489 12:14:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:15:42.489 12:14:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:42.489 12:14:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:42.489 12:14:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:42.489 12:14:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:42.489 12:14:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:42.489 12:14:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:15:42.489 256+0 records in 00:15:42.489 256+0 records out 00:15:42.489 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00480853 s, 218 MB/s 00:15:42.489 12:14:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:42.489 12:14:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:42.489 256+0 records in 00:15:42.489 256+0 records out 00:15:42.489 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136123 s, 77.0 MB/s 00:15:42.489 12:14:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:42.489 12:14:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:42.489 256+0 records in 00:15:42.489 256+0 records out 00:15:42.489 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0160376 s, 65.4 MB/s 00:15:42.489 12:14:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:15:42.490 12:14:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:42.490 12:14:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:42.490 12:14:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:42.490 12:14:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:42.490 12:14:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:42.490 12:14:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:42.490 12:14:38 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:42.490 12:14:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:15:42.490 12:14:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:42.490 12:14:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:15:42.490 12:14:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:42.490 12:14:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:15:42.490 12:14:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:42.490 12:14:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:42.490 12:14:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:42.490 12:14:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:15:42.490 12:14:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:42.490 12:14:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:42.747 12:14:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:42.747 12:14:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:42.747 12:14:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:42.747 12:14:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:42.747 12:14:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:42.747 12:14:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:42.747 12:14:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:15:42.747 12:14:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:15:42.747 12:14:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:42.747 12:14:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:43.004 12:14:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:43.004 12:14:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:43.004 12:14:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:43.004 12:14:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:43.004 12:14:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:43.004 12:14:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:43.004 12:14:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:15:43.004 12:14:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:15:43.004 12:14:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:43.004 12:14:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:43.004 12:14:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:43.262 12:14:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:43.262 12:14:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:43.262 12:14:38 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:15:43.262 12:14:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:43.262 12:14:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:15:43.262 12:14:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:43.262 12:14:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:15:43.262 12:14:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:15:43.262 12:14:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:15:43.262 12:14:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:15:43.262 12:14:38 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:43.262 12:14:38 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:15:43.262 12:14:38 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:15:43.519 12:14:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:15:43.519 [2024-11-25 12:14:39.207316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:43.519 [2024-11-25 12:14:39.237356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:43.519 [2024-11-25 12:14:39.237366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.519 [2024-11-25 12:14:39.266119] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:15:43.519 [2024-11-25 12:14:39.266161] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:15:46.855 spdk_app_start Round 2 00:15:46.855 12:14:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:15:46.855 12:14:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:15:46.855 12:14:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59641 /var/tmp/spdk-nbd.sock 00:15:46.855 12:14:42 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59641 ']' 00:15:46.855 12:14:42 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:46.855 12:14:42 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:46.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:46.855 12:14:42 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
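Note: each app_repeat round above runs the same write-then-verify data path. A condensed sketch of that path, with the block sizes, counts, cmp bounds, and temp-file path taken from the traced dd and cmp command lines; the real helper in bdev/nbd_common.sh takes an operation argument and is invoked once per phase, so the one-shot function name here is hypothetical:

nbd_dd_data_verify_once() {
    local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    local nbd_list=("$@") nbd    # e.g. /dev/nbd0 /dev/nbd1

    # write phase: 256 x 4 KiB of random data, mirrored to every NBD device
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for nbd in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct
    done

    # verify phase: byte-compare the first 1 MiB of each device to the source
    for nbd in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$nbd"
    done
    rm "$tmp_file"
}

Using oflag=direct on the write keeps the page cache out of the comparison, so cmp really exercises the bdev behind the NBD device rather than cached pages.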
00:15:46.855 12:14:42 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:46.855 12:14:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:15:46.855 12:14:42 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:46.855 12:14:42 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:15:46.855 12:14:42 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:15:46.855 Malloc0 00:15:46.855 12:14:42 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:15:47.112 Malloc1 00:15:47.112 12:14:42 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:15:47.112 12:14:42 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:47.112 12:14:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:15:47.112 12:14:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:47.112 12:14:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:47.112 12:14:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:47.112 12:14:42 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:15:47.112 12:14:42 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:47.112 12:14:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:15:47.112 12:14:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:47.112 12:14:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:47.112 12:14:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:47.112 12:14:42 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:15:47.112 12:14:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:47.112 12:14:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:47.112 12:14:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:15:47.370 /dev/nbd0 00:15:47.370 12:14:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:47.370 12:14:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:47.370 12:14:43 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:47.370 12:14:43 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:15:47.370 12:14:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:47.370 12:14:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:47.370 12:14:43 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:47.370 12:14:43 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:15:47.370 12:14:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:47.370 12:14:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:47.370 12:14:43 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:15:47.370 1+0 records in 00:15:47.370 1+0 records out 
00:15:47.370 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000173221 s, 23.6 MB/s 00:15:47.370 12:14:43 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:47.370 12:14:43 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:15:47.370 12:14:43 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:47.370 12:14:43 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:47.370 12:14:43 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:15:47.370 12:14:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:47.370 12:14:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:47.370 12:14:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:15:47.627 /dev/nbd1 00:15:47.627 12:14:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:47.627 12:14:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:47.627 12:14:43 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:47.627 12:14:43 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:15:47.627 12:14:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:47.627 12:14:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:47.627 12:14:43 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:47.627 12:14:43 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:15:47.627 12:14:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:47.627 12:14:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:47.627 12:14:43 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:15:47.627 1+0 records in 00:15:47.627 1+0 records out 00:15:47.627 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255008 s, 16.1 MB/s 00:15:47.627 12:14:43 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:47.627 12:14:43 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:15:47.627 12:14:43 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:15:47.627 12:14:43 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:47.627 12:14:43 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:15:47.627 12:14:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:47.627 12:14:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:47.627 12:14:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:47.627 12:14:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:47.627 12:14:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:47.885 12:14:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:47.885 { 00:15:47.885 "bdev_name": "Malloc0", 00:15:47.885 "nbd_device": "/dev/nbd0" 00:15:47.885 }, 00:15:47.885 { 00:15:47.885 "bdev_name": "Malloc1", 00:15:47.885 "nbd_device": "/dev/nbd1" 00:15:47.885 } 
00:15:47.885 ]' 00:15:47.885 12:14:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:47.885 { 00:15:47.885 "bdev_name": "Malloc0", 00:15:47.885 "nbd_device": "/dev/nbd0" 00:15:47.885 }, 00:15:47.885 { 00:15:47.885 "bdev_name": "Malloc1", 00:15:47.885 "nbd_device": "/dev/nbd1" 00:15:47.885 } 00:15:47.885 ]' 00:15:47.885 12:14:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:47.885 12:14:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:47.885 /dev/nbd1' 00:15:47.885 12:14:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:47.885 /dev/nbd1' 00:15:47.885 12:14:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:47.885 12:14:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:15:47.885 12:14:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:15:47.885 12:14:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:15:47.885 12:14:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:15:47.885 12:14:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:15:47.885 12:14:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:47.885 12:14:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:47.885 12:14:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:47.885 12:14:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:47.885 12:14:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:47.885 12:14:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:15:47.885 256+0 records in 00:15:47.885 256+0 records out 00:15:47.885 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00422132 s, 248 MB/s 00:15:47.885 12:14:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:47.885 12:14:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:47.885 256+0 records in 00:15:47.885 256+0 records out 00:15:47.885 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0142336 s, 73.7 MB/s 00:15:47.885 12:14:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:47.885 12:14:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:47.885 256+0 records in 00:15:47.885 256+0 records out 00:15:47.885 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0151112 s, 69.4 MB/s 00:15:47.885 12:14:43 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:15:47.885 12:14:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:47.885 12:14:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:47.885 12:14:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:47.885 12:14:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:47.885 12:14:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:47.885 12:14:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:47.885 12:14:43 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:47.885 12:14:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:15:47.885 12:14:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:47.885 12:14:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:15:47.885 12:14:43 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:15:47.885 12:14:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:15:47.885 12:14:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:47.885 12:14:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:47.885 12:14:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:47.885 12:14:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:15:47.885 12:14:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:47.885 12:14:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:48.143 12:14:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:48.143 12:14:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:48.143 12:14:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:48.143 12:14:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:48.143 12:14:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:48.143 12:14:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:48.143 12:14:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:15:48.143 12:14:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:15:48.143 12:14:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:48.143 12:14:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:48.401 12:14:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:48.401 12:14:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:48.401 12:14:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:48.401 12:14:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:48.401 12:14:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:48.401 12:14:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:48.401 12:14:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:15:48.401 12:14:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:15:48.401 12:14:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:48.401 12:14:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:48.401 12:14:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:48.659 12:14:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:48.659 12:14:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:48.659 12:14:44 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:15:48.659 12:14:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:48.659 12:14:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:15:48.659 12:14:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:48.659 12:14:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:15:48.659 12:14:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:15:48.659 12:14:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:15:48.659 12:14:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:15:48.659 12:14:44 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:48.659 12:14:44 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:15:48.659 12:14:44 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:15:48.917 12:14:44 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:15:48.917 [2024-11-25 12:14:44.600460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:48.917 [2024-11-25 12:14:44.630590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:48.917 [2024-11-25 12:14:44.630738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:48.917 [2024-11-25 12:14:44.659634] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:15:48.917 [2024-11-25 12:14:44.659667] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:15:52.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:52.196 12:14:47 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59641 /var/tmp/spdk-nbd.sock 00:15:52.196 12:14:47 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59641 ']' 00:15:52.196 12:14:47 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:52.196 12:14:47 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:52.196 12:14:47 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
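Note: every round's teardown above ends with nbd_get_count returning 0 before the app is killed. A sketch of that check as traced (rpc.py path as in the log; grep -c exits non-zero when nothing matches, which is why the trace shows a `true` fallback at nbd_common.sh@65):

nbd_get_count() {
    local rpc_server=$1
    local nbd_disks_json nbd_disks_name count
    nbd_disks_json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" nbd_get_disks)
    nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
    # grep -c returns 1 on zero matches, hence the fallback seen in the trace
    count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
    echo "$count"    # 0 once every device has been stopped
}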
00:15:52.196 12:14:47 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:52.196 12:14:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:15:52.196 12:14:47 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:52.196 12:14:47 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:15:52.196 12:14:47 event.app_repeat -- event/event.sh@39 -- # killprocess 59641 00:15:52.196 12:14:47 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 59641 ']' 00:15:52.196 12:14:47 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 59641 00:15:52.196 12:14:47 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:15:52.196 12:14:47 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:52.196 12:14:47 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59641 00:15:52.196 killing process with pid 59641 00:15:52.196 12:14:47 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:52.196 12:14:47 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:52.196 12:14:47 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59641' 00:15:52.196 12:14:47 event.app_repeat -- common/autotest_common.sh@973 -- # kill 59641 00:15:52.196 12:14:47 event.app_repeat -- common/autotest_common.sh@978 -- # wait 59641 00:15:52.196 spdk_app_start is called in Round 0. 00:15:52.196 Shutdown signal received, stop current app iteration 00:15:52.196 Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 reinitialization... 00:15:52.196 spdk_app_start is called in Round 1. 00:15:52.196 Shutdown signal received, stop current app iteration 00:15:52.196 Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 reinitialization... 00:15:52.196 spdk_app_start is called in Round 2. 00:15:52.196 Shutdown signal received, stop current app iteration 00:15:52.196 Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 reinitialization... 00:15:52.196 spdk_app_start is called in Round 3. 00:15:52.196 Shutdown signal received, stop current app iteration 00:15:52.196 ************************************ 00:15:52.196 END TEST app_repeat 00:15:52.196 ************************************ 00:15:52.196 12:14:47 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:15:52.196 12:14:47 event.app_repeat -- event/event.sh@42 -- # return 0 00:15:52.196 00:15:52.196 real 0m17.193s 00:15:52.196 user 0m38.555s 00:15:52.196 sys 0m2.275s 00:15:52.196 12:14:47 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:52.196 12:14:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:15:52.196 12:14:47 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:15:52.196 12:14:47 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:15:52.196 12:14:47 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:52.196 12:14:47 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:52.196 12:14:47 event -- common/autotest_common.sh@10 -- # set +x 00:15:52.196 ************************************ 00:15:52.196 START TEST cpu_locks 00:15:52.196 ************************************ 00:15:52.196 12:14:47 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:15:52.196 * Looking for test storage... 
00:15:52.196 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:15:52.196 12:14:47 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:52.196 12:14:47 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:15:52.196 12:14:47 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:52.196 12:14:48 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:52.196 12:14:48 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:52.196 12:14:48 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:52.197 12:14:48 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:52.197 12:14:48 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:15:52.197 12:14:48 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:15:52.197 12:14:48 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:15:52.197 12:14:48 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:15:52.197 12:14:48 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:15:52.197 12:14:48 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:15:52.197 12:14:48 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:15:52.197 12:14:48 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:52.197 12:14:48 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:15:52.197 12:14:48 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:15:52.197 12:14:48 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:52.197 12:14:48 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:52.197 12:14:48 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:15:52.197 12:14:48 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:15:52.197 12:14:48 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:52.197 12:14:48 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:15:52.197 12:14:48 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:15:52.197 12:14:48 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:15:52.197 12:14:48 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:15:52.197 12:14:48 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:52.197 12:14:48 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:15:52.197 12:14:48 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:15:52.197 12:14:48 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:52.197 12:14:48 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:52.197 12:14:48 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:15:52.197 12:14:48 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:52.197 12:14:48 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:52.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.197 --rc genhtml_branch_coverage=1 00:15:52.197 --rc genhtml_function_coverage=1 00:15:52.197 --rc genhtml_legend=1 00:15:52.197 --rc geninfo_all_blocks=1 00:15:52.197 --rc geninfo_unexecuted_blocks=1 00:15:52.197 00:15:52.197 ' 00:15:52.197 12:14:48 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:52.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.197 --rc genhtml_branch_coverage=1 00:15:52.197 --rc genhtml_function_coverage=1 
00:15:52.197 --rc genhtml_legend=1 00:15:52.197 --rc geninfo_all_blocks=1 00:15:52.197 --rc geninfo_unexecuted_blocks=1 00:15:52.197 00:15:52.197 ' 00:15:52.197 12:14:48 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:52.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.197 --rc genhtml_branch_coverage=1 00:15:52.197 --rc genhtml_function_coverage=1 00:15:52.197 --rc genhtml_legend=1 00:15:52.197 --rc geninfo_all_blocks=1 00:15:52.197 --rc geninfo_unexecuted_blocks=1 00:15:52.197 00:15:52.197 ' 00:15:52.197 12:14:48 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:52.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.197 --rc genhtml_branch_coverage=1 00:15:52.197 --rc genhtml_function_coverage=1 00:15:52.197 --rc genhtml_legend=1 00:15:52.197 --rc geninfo_all_blocks=1 00:15:52.197 --rc geninfo_unexecuted_blocks=1 00:15:52.197 00:15:52.197 ' 00:15:52.197 12:14:48 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:15:52.197 12:14:48 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:15:52.197 12:14:48 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:15:52.197 12:14:48 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:15:52.197 12:14:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:52.197 12:14:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:52.197 12:14:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:15:52.197 ************************************ 00:15:52.197 START TEST default_locks 00:15:52.197 ************************************ 00:15:52.197 12:14:48 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:15:52.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.197 12:14:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60258 00:15:52.197 12:14:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60258 00:15:52.197 12:14:48 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60258 ']' 00:15:52.197 12:14:48 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.197 12:14:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:15:52.197 12:14:48 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:52.197 12:14:48 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.197 12:14:48 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:52.197 12:14:48 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:15:52.455 [2024-11-25 12:14:48.110980] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:15:52.455 [2024-11-25 12:14:48.111048] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60258 ] 00:15:52.455 [2024-11-25 12:14:48.249325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.455 [2024-11-25 12:14:48.284114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.392 12:14:48 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:53.392 12:14:48 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:15:53.392 12:14:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60258 00:15:53.392 12:14:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60258 00:15:53.392 12:14:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:15:53.392 12:14:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60258 00:15:53.392 12:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 60258 ']' 00:15:53.392 12:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 60258 00:15:53.392 12:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:15:53.392 12:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:53.392 12:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60258 00:15:53.392 12:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:53.392 12:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:53.392 killing process with pid 60258 00:15:53.392 12:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60258' 00:15:53.392 12:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 60258 00:15:53.392 12:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 60258 00:15:53.649 12:14:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60258 00:15:53.649 12:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:15:53.649 12:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60258 00:15:53.649 12:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:15:53.649 12:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:53.649 12:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:15:53.649 12:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:53.649 12:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 60258 00:15:53.649 12:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60258 ']' 00:15:53.649 12:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.649 12:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:53.649 12:14:49 
event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:53.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:53.649 12:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:53.649 12:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:15:53.649 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60258) - No such process 00:15:53.649 ERROR: process (pid: 60258) is no longer running 00:15:53.649 12:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:53.649 12:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:15:53.649 12:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:15:53.649 12:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:53.649 12:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:53.649 12:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:53.649 12:14:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:15:53.649 12:14:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:15:53.649 12:14:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:15:53.649 12:14:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:15:53.649 00:15:53.649 real 0m1.224s 00:15:53.649 user 0m1.272s 00:15:53.649 sys 0m0.308s 00:15:53.649 12:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:53.649 12:14:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:15:53.649 ************************************ 00:15:53.649 END TEST default_locks 00:15:53.649 ************************************ 00:15:53.649 12:14:49 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:15:53.649 12:14:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:53.649 12:14:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:53.649 12:14:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:15:53.649 ************************************ 00:15:53.649 START TEST default_locks_via_rpc 00:15:53.649 ************************************ 00:15:53.649 12:14:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:15:53.649 12:14:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60311 00:15:53.649 12:14:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60311 00:15:53.649 12:14:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60311 ']' 00:15:53.649 12:14:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.649 12:14:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:53.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
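Note: default_locks above asserts the CPU core lock from both sides: locks_exist must succeed while the target runs, and after killprocess the NOT-wrapped waitforlisten must fail (es=1, "No such process"). A sketch of the traced lock probe; the spdk_cpu_lock pattern comes straight from the grep in the trace, while the exact on-disk paths of the per-core lock files are not shown in the log and are left out here:

locks_exist() {
    local pid=$1
    # a running spdk_tgt holds one file lock per reactor core; lslocks lists
    # them by pid, so a quiet grep is enough to assert the lock is held
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}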
00:15:53.649 12:14:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:53.649 12:14:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:53.649 12:14:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:53.649 12:14:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:15:53.649 [2024-11-25 12:14:49.367828] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:15:53.649 [2024-11-25 12:14:49.368270] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60311 ] 00:15:53.649 [2024-11-25 12:14:49.506219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.907 [2024-11-25 12:14:49.541660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.474 12:14:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:54.474 12:14:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:54.474 12:14:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:15:54.474 12:14:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.474 12:14:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.474 12:14:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.474 12:14:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:15:54.474 12:14:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:15:54.474 12:14:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:15:54.474 12:14:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:15:54.474 12:14:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:15:54.474 12:14:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.474 12:14:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.474 12:14:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.474 12:14:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60311 00:15:54.474 12:14:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60311 00:15:54.474 12:14:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:15:54.733 12:14:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60311 00:15:54.733 12:14:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 60311 ']' 00:15:54.733 12:14:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 60311 00:15:54.733 12:14:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:15:54.733 12:14:50 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:54.733 12:14:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60311 00:15:54.733 12:14:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:54.733 12:14:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:54.733 killing process with pid 60311 00:15:54.733 12:14:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60311' 00:15:54.733 12:14:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 60311 00:15:54.733 12:14:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 60311 00:15:54.992 00:15:54.992 real 0m1.317s 00:15:54.992 user 0m1.433s 00:15:54.992 sys 0m0.317s 00:15:54.992 12:14:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:54.992 12:14:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:54.992 ************************************ 00:15:54.992 END TEST default_locks_via_rpc 00:15:54.992 ************************************ 00:15:54.992 12:14:50 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:15:54.992 12:14:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:54.992 12:14:50 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:54.992 12:14:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:15:54.992 ************************************ 00:15:54.992 START TEST non_locking_app_on_locked_coremask 00:15:54.992 ************************************ 00:15:54.992 12:14:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:15:54.992 12:14:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60369 00:15:54.992 12:14:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60369 /var/tmp/spdk.sock 00:15:54.992 12:14:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60369 ']' 00:15:54.992 12:14:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:15:54.992 12:14:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:54.992 12:14:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:54.992 12:14:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:54.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:54.992 12:14:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:54.992 12:14:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:15:54.992 [2024-11-25 12:14:50.731621] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:15:54.992 [2024-11-25 12:14:50.731693] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60369 ] 00:15:55.251 [2024-11-25 12:14:50.875149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:55.251 [2024-11-25 12:14:50.911852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.251 12:14:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:55.251 12:14:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:15:55.251 12:14:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60389 00:15:55.251 12:14:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60389 /var/tmp/spdk2.sock 00:15:55.251 12:14:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60389 ']' 00:15:55.251 12:14:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:15:55.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:15:55.251 12:14:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:55.251 12:14:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:15:55.251 12:14:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:15:55.251 12:14:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:55.251 12:14:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:15:55.508 [2024-11-25 12:14:51.144366] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:15:55.509 [2024-11-25 12:14:51.144434] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60389 ] 00:15:55.509 [2024-11-25 12:14:51.296942] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
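The two launches traced above are the heart of this test: the first spdk_tgt (pid 60369) claims the lock file for core 0, while the second (pid 60389) is started with --disable-cpumask-locks plus its own RPC socket, so it can run on the very same core without taking a lock. A minimal sketch of that pattern, assuming spdk_tgt is on PATH and the lock files live under /var/tmp as shown later in this log:

  spdk_tgt -m 0x1 &                                   # claims /var/tmp/spdk_cpu_lock_000
  pid1=$!
  spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  pid2=$!
  lslocks -p "$pid1" | grep -q spdk_cpu_lock          # the first target holds the lock
  lslocks -p "$pid2" | grep -q spdk_cpu_lock || true  # the second holds none, by design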
00:15:55.509 [2024-11-25 12:14:51.296988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:55.509 [2024-11-25 12:14:51.371620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.137 12:14:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:56.137 12:14:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:15:56.137 12:14:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60369 00:15:56.137 12:14:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60369 00:15:56.137 12:14:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:15:56.703 12:14:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60369 00:15:56.703 12:14:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60369 ']' 00:15:56.703 12:14:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60369 00:15:56.703 12:14:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:15:56.703 12:14:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:56.703 12:14:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60369 00:15:56.703 12:14:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:56.703 killing process with pid 60369 00:15:56.703 12:14:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:56.703 12:14:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60369' 00:15:56.703 12:14:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60369 00:15:56.703 12:14:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60369 00:15:56.961 12:14:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60389 00:15:56.961 12:14:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60389 ']' 00:15:56.961 12:14:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60389 00:15:56.961 12:14:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:15:56.961 12:14:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:56.961 12:14:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60389 00:15:56.961 12:14:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:56.961 killing process with pid 60389 00:15:56.961 12:14:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:56.961 12:14:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60389' 00:15:56.961 12:14:52 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60389 00:15:56.961 12:14:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60389 00:15:57.219 00:15:57.219 real 0m2.251s 00:15:57.219 user 0m2.474s 00:15:57.219 sys 0m0.640s 00:15:57.219 12:14:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:57.219 ************************************ 00:15:57.219 END TEST non_locking_app_on_locked_coremask 00:15:57.219 ************************************ 00:15:57.219 12:14:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:15:57.219 12:14:52 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:15:57.219 12:14:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:57.219 12:14:52 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:57.219 12:14:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:15:57.219 ************************************ 00:15:57.219 START TEST locking_app_on_unlocked_coremask 00:15:57.219 ************************************ 00:15:57.219 12:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:15:57.219 12:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60451 00:15:57.219 12:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60451 /var/tmp/spdk.sock 00:15:57.219 12:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60451 ']' 00:15:57.219 12:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:57.219 12:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:15:57.219 12:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:57.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:57.219 12:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:57.219 12:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:57.219 12:14:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:15:57.219 [2024-11-25 12:14:53.032242] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:15:57.219 [2024-11-25 12:14:53.032298] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60451 ] 00:15:57.476 [2024-11-25 12:14:53.170106] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:15:57.476 [2024-11-25 12:14:53.170143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.476 [2024-11-25 12:14:53.206154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.041 12:14:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:58.041 12:14:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:15:58.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:15:58.041 12:14:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60474 00:15:58.041 12:14:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60474 /var/tmp/spdk2.sock 00:15:58.041 12:14:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60474 ']' 00:15:58.041 12:14:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:15:58.041 12:14:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:15:58.041 12:14:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:58.041 12:14:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:15:58.041 12:14:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:58.041 12:14:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:15:58.305 [2024-11-25 12:14:53.955954] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
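This case inverts the previous one: the first target (pid 60451) starts with --disable-cpumask-locks, so core 0 stays unlocked, and the plain-launched second target (pid 60474, on /var/tmp/spdk2.sock) is the one that claims the lock. The lslocks check that follows is therefore run against pid 60474. In outline, a hedged sketch under the same assumptions as above:

  spdk_tgt -m 0x1 --disable-cpumask-locks &    # takes no lock on core 0
  spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &     # this one claims spdk_cpu_lock_000
  pid2=$!
  lslocks -p "$pid2" | grep -q spdk_cpu_lock   # the locking app holds it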
00:15:58.305 [2024-11-25 12:14:53.956016] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60474 ] 00:15:58.305 [2024-11-25 12:14:54.111046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.563 [2024-11-25 12:14:54.184296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.127 12:14:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:59.127 12:14:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:15:59.127 12:14:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60474 00:15:59.127 12:14:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60474 00:15:59.127 12:14:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:15:59.385 12:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60451 00:15:59.385 12:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60451 ']' 00:15:59.385 12:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60451 00:15:59.385 12:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:15:59.385 12:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:59.385 12:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60451 00:15:59.385 killing process with pid 60451 00:15:59.385 12:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:59.385 12:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:59.385 12:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60451' 00:15:59.385 12:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60451 00:15:59.385 12:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60451 00:15:59.950 12:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60474 00:15:59.950 12:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60474 ']' 00:15:59.950 12:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60474 00:15:59.950 12:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:15:59.951 12:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:59.951 12:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60474 00:15:59.951 killing process with pid 60474 00:15:59.951 12:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:59.951 12:14:55 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:59.951 12:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60474' 00:15:59.951 12:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60474 00:15:59.951 12:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60474 00:15:59.951 ************************************ 00:15:59.951 END TEST locking_app_on_unlocked_coremask 00:15:59.951 ************************************ 00:15:59.951 00:15:59.951 real 0m2.762s 00:15:59.951 user 0m3.208s 00:15:59.951 sys 0m0.633s 00:15:59.951 12:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:59.951 12:14:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:15:59.951 12:14:55 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:15:59.951 12:14:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:59.951 12:14:55 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:59.951 12:14:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:15:59.951 ************************************ 00:15:59.951 START TEST locking_app_on_locked_coremask 00:15:59.951 ************************************ 00:15:59.951 12:14:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:15:59.951 12:14:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60542 00:15:59.951 12:14:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60542 /var/tmp/spdk.sock 00:15:59.951 12:14:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60542 ']' 00:15:59.951 12:14:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:15:59.951 12:14:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:59.951 12:14:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:59.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:59.951 12:14:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:59.951 12:14:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:59.951 12:14:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:16:00.209 [2024-11-25 12:14:55.856130] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:16:00.209 [2024-11-25 12:14:55.856196] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60542 ] 00:16:00.209 [2024-11-25 12:14:55.990567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.209 [2024-11-25 12:14:56.021477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.142 12:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:01.142 12:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:16:01.142 12:14:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60570 00:16:01.142 12:14:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60570 /var/tmp/spdk2.sock 00:16:01.142 12:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:16:01.142 12:14:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:16:01.142 12:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60570 /var/tmp/spdk2.sock 00:16:01.142 12:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:16:01.142 12:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:01.142 12:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:16:01.142 12:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:01.142 12:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60570 /var/tmp/spdk2.sock 00:16:01.142 12:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60570 ']' 00:16:01.142 12:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:16:01.142 12:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:01.142 12:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:16:01.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:16:01.142 12:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:01.142 12:14:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:16:01.142 [2024-11-25 12:14:56.766546] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
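The launch just traced is the deliberate negative case: pid 60542 already holds the core 0 lock, and this second instance is started without --disable-cpumask-locks, so it is expected to abort. The 'Cannot create lock on core 0' error and the NOT/return-1 handling follow below. Reduced to a sketch:

  spdk_tgt -m 0x1 &                        # pid A holds /var/tmp/spdk_cpu_lock_000
  spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock   # expected to exit with "Cannot create lock
                                           # on core 0, probably process A has claimed it."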
00:16:01.142 [2024-11-25 12:14:56.766601] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60570 ] 00:16:01.142 [2024-11-25 12:14:56.907823] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60542 has claimed it. 00:16:01.142 [2024-11-25 12:14:56.907866] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:16:01.749 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60570) - No such process 00:16:01.750 ERROR: process (pid: 60570) is no longer running 00:16:01.750 12:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:01.750 12:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:16:01.750 12:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:16:01.750 12:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:01.750 12:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:01.750 12:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:01.750 12:14:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60542 00:16:01.750 12:14:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60542 00:16:01.750 12:14:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:16:02.007 12:14:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60542 00:16:02.007 12:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60542 ']' 00:16:02.007 12:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60542 00:16:02.007 12:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:16:02.007 12:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:02.007 12:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60542 00:16:02.007 12:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:02.007 12:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:02.007 12:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60542' 00:16:02.007 killing process with pid 60542 00:16:02.007 12:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60542 00:16:02.007 12:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60542 00:16:02.266 00:16:02.266 real 0m2.097s 00:16:02.266 user 0m2.425s 00:16:02.266 sys 0m0.413s 00:16:02.266 12:14:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:02.266 12:14:57 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:16:02.266 ************************************ 00:16:02.266 END TEST locking_app_on_locked_coremask 00:16:02.266 ************************************ 00:16:02.266 12:14:57 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:16:02.266 12:14:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:02.266 12:14:57 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:02.266 12:14:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:16:02.266 ************************************ 00:16:02.266 START TEST locking_overlapped_coremask 00:16:02.266 ************************************ 00:16:02.266 12:14:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:16:02.266 12:14:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60616 00:16:02.266 12:14:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60616 /var/tmp/spdk.sock 00:16:02.266 12:14:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60616 ']' 00:16:02.266 12:14:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:02.266 12:14:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:02.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:02.266 12:14:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:02.266 12:14:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:02.266 12:14:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:16:02.266 12:14:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:16:02.266 [2024-11-25 12:14:57.989325] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:16:02.266 [2024-11-25 12:14:57.989404] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60616 ] 00:16:02.266 [2024-11-25 12:14:58.124723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:02.525 [2024-11-25 12:14:58.159691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:02.525 [2024-11-25 12:14:58.159768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.525 [2024-11-25 12:14:58.159778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:03.091 12:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:03.091 12:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:16:03.091 12:14:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60646 00:16:03.092 12:14:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:16:03.092 12:14:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60646 /var/tmp/spdk2.sock 00:16:03.092 12:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:16:03.092 12:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60646 /var/tmp/spdk2.sock 00:16:03.092 12:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:16:03.092 12:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:03.092 12:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:16:03.092 12:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:03.092 12:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60646 /var/tmp/spdk2.sock 00:16:03.092 12:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60646 ']' 00:16:03.092 12:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:16:03.092 12:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:03.092 12:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:16:03.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:16:03.092 12:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:03.092 12:14:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:16:03.092 [2024-11-25 12:14:58.909858] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
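The two cpumasks overlap on exactly one core: -m 0x7 is binary 111 (cores 0 through 2, matching the three 'Reactor started' lines above) and -m 0x1c is binary 11100 (cores 2 through 4). Core 2 sits in both masks, so the second target's claim is expected to fail there, which is precisely the 'Cannot create lock on core 2' error below. Decoding such masks needs nothing SPDK-specific, only shell arithmetic:

  for mask in 0x7 0x1c; do
    printf '%-4s -> cores:' "$mask"
    for i in $(seq 0 4); do (( (mask >> i) & 1 )) && printf ' %d' "$i"; done
    echo
  done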
00:16:03.092 [2024-11-25 12:14:58.909929] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60646 ] 00:16:03.350 [2024-11-25 12:14:59.071654] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60616 has claimed it. 00:16:03.350 [2024-11-25 12:14:59.071715] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:16:03.915 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60646) - No such process 00:16:03.915 ERROR: process (pid: 60646) is no longer running 00:16:03.915 12:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:03.915 12:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:16:03.915 12:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:16:03.915 12:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:03.915 12:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:03.915 12:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:03.915 12:14:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:16:03.915 12:14:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:16:03.915 12:14:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:16:03.915 12:14:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:16:03.915 12:14:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60616 00:16:03.915 12:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 60616 ']' 00:16:03.915 12:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 60616 00:16:03.915 12:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:16:03.915 12:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:03.915 12:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60616 00:16:03.915 12:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:03.915 12:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:03.915 12:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60616' 00:16:03.915 killing process with pid 60616 00:16:03.915 12:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 60616 00:16:03.915 12:14:59 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 60616 00:16:04.173 00:16:04.173 real 0m1.860s 00:16:04.173 user 0m5.340s 00:16:04.173 sys 0m0.299s 00:16:04.173 12:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:04.173 ************************************ 00:16:04.173 END TEST locking_overlapped_coremask 00:16:04.173 ************************************ 00:16:04.173 12:14:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:16:04.173 12:14:59 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:16:04.173 12:14:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:04.173 12:14:59 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:04.173 12:14:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:16:04.173 ************************************ 00:16:04.173 START TEST locking_overlapped_coremask_via_rpc 00:16:04.173 ************************************ 00:16:04.173 12:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:16:04.173 12:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60692 00:16:04.173 12:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60692 /var/tmp/spdk.sock 00:16:04.173 12:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60692 ']' 00:16:04.173 12:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:04.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:04.173 12:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:04.173 12:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:04.173 12:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:04.173 12:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:16:04.173 12:14:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:04.173 [2024-11-25 12:14:59.889544] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:16:04.173 [2024-11-25 12:14:59.889618] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60692 ] 00:16:04.173 [2024-11-25 12:15:00.026898] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:16:04.173 [2024-11-25 12:15:00.026944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:04.431 [2024-11-25 12:15:00.062525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:04.431 [2024-11-25 12:15:00.062645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.431 [2024-11-25 12:15:00.062647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:05.049 12:15:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:05.049 12:15:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:05.049 12:15:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60723 00:16:05.049 12:15:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:16:05.049 12:15:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60723 /var/tmp/spdk2.sock 00:16:05.049 12:15:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60723 ']' 00:16:05.049 12:15:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:16:05.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:16:05.049 12:15:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:05.049 12:15:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:16:05.049 12:15:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:05.049 12:15:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:05.049 [2024-11-25 12:15:00.812026] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:16:05.049 [2024-11-25 12:15:00.812450] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60723 ] 00:16:05.307 [2024-11-25 12:15:00.968881] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:16:05.307 [2024-11-25 12:15:00.968929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:05.307 [2024-11-25 12:15:01.047944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:05.307 [2024-11-25 12:15:01.048137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:05.307 [2024-11-25 12:15:01.048139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:16:06.242 12:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:06.242 12:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:06.242 12:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:16:06.242 12:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.242 12:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:06.242 12:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.242 12:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:16:06.242 12:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:16:06.242 12:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:16:06.242 12:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:06.242 12:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:06.242 12:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:06.242 12:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:06.242 12:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:16:06.242 12:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.242 12:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:06.242 [2024-11-25 12:15:01.820879] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60692 has claimed it. 
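Unlike the previous test, both targets here were started with --disable-cpumask-locks, so no locks exist at launch; they are taken afterwards through the framework_enable_cpumask_locks RPC, wrapped by the test's rpc_cmd helper. The first enable claims cores 0 through 2 for pid 60692; the second, issued to the 0x1c-masked target over /var/tmp/spdk2.sock, then fails on the shared core 2, as the error above shows. A sketch using SPDK's rpc.py client (the client invocation is an assumption; the method name is verbatim from this log):

  spdk_tgt -m 0x7  --disable-cpumask-locks &
  spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  rpc.py framework_enable_cpumask_locks                         # locks cores 0-2
  rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # fails: core 2 claimed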
00:16:06.242 2024/11/25 12:15:01 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:16:06.242 request: 00:16:06.242 { 00:16:06.242 "method": "framework_enable_cpumask_locks", 00:16:06.242 "params": {} 00:16:06.242 } 00:16:06.242 Got JSON-RPC error response 00:16:06.242 GoRPCClient: error on JSON-RPC call 00:16:06.242 12:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:06.242 12:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:16:06.242 12:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:06.242 12:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:06.242 12:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:06.242 12:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60692 /var/tmp/spdk.sock 00:16:06.242 12:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60692 ']' 00:16:06.242 12:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:06.242 12:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:06.242 12:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:06.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:06.242 12:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:06.242 12:15:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:06.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:16:06.242 12:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:06.242 12:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:06.242 12:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60723 /var/tmp/spdk2.sock 00:16:06.242 12:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60723 ']' 00:16:06.242 12:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:16:06.242 12:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:06.242 12:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
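For reference, the request/response dump above corresponds to a raw JSON-RPC exchange along these lines (the envelope and the id field are assumptions based on the JSON-RPC 2.0 convention; method, params, error code, and message are taken from the log):

  -> {"jsonrpc": "2.0", "id": 1, "method": "framework_enable_cpumask_locks", "params": {}}
  <- {"jsonrpc": "2.0", "id": 1, "error": {"code": -32603, "message": "Failed to claim CPU core: 2"}}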
00:16:06.242 12:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:06.242 12:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:06.500 ************************************ 00:16:06.500 END TEST locking_overlapped_coremask_via_rpc 00:16:06.501 ************************************ 00:16:06.501 12:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:06.501 12:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:06.501 12:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:16:06.501 12:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:16:06.501 12:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:16:06.501 12:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:16:06.501 00:16:06.501 real 0m2.409s 00:16:06.501 user 0m1.186s 00:16:06.501 sys 0m0.172s 00:16:06.501 12:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:06.501 12:15:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:06.501 12:15:02 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:16:06.501 12:15:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60692 ]] 00:16:06.501 12:15:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60692 00:16:06.501 12:15:02 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60692 ']' 00:16:06.501 12:15:02 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60692 00:16:06.501 12:15:02 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:16:06.501 12:15:02 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:06.501 12:15:02 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60692 00:16:06.501 killing process with pid 60692 00:16:06.501 12:15:02 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:06.501 12:15:02 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:06.501 12:15:02 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60692' 00:16:06.501 12:15:02 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60692 00:16:06.501 12:15:02 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60692 00:16:06.759 12:15:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60723 ]] 00:16:06.759 12:15:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60723 00:16:06.759 12:15:02 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60723 ']' 00:16:06.759 12:15:02 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60723 00:16:06.759 12:15:02 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:16:06.759 12:15:02 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:06.759 
12:15:02 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60723 00:16:06.759 killing process with pid 60723 00:16:06.759 12:15:02 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:06.759 12:15:02 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:06.759 12:15:02 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60723' 00:16:06.759 12:15:02 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60723 00:16:06.759 12:15:02 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60723 00:16:07.018 12:15:02 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:16:07.018 12:15:02 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:16:07.018 12:15:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60692 ]] 00:16:07.018 12:15:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60692 00:16:07.018 12:15:02 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60692 ']' 00:16:07.018 12:15:02 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60692 00:16:07.018 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60692) - No such process 00:16:07.018 Process with pid 60692 is not found 00:16:07.018 12:15:02 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60692 is not found' 00:16:07.018 12:15:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60723 ]] 00:16:07.018 12:15:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60723 00:16:07.018 12:15:02 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60723 ']' 00:16:07.018 Process with pid 60723 is not found 00:16:07.018 12:15:02 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60723 00:16:07.018 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60723) - No such process 00:16:07.018 12:15:02 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60723 is not found' 00:16:07.018 12:15:02 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:16:07.018 00:16:07.018 real 0m14.814s 00:16:07.018 user 0m28.092s 00:16:07.018 sys 0m3.367s 00:16:07.018 12:15:02 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:07.018 ************************************ 00:16:07.018 END TEST cpu_locks 00:16:07.018 ************************************ 00:16:07.018 12:15:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:16:07.018 00:16:07.018 real 0m38.395s 00:16:07.018 user 1m17.783s 00:16:07.018 sys 0m6.244s 00:16:07.018 12:15:02 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:07.018 ************************************ 00:16:07.018 END TEST event 00:16:07.018 ************************************ 00:16:07.018 12:15:02 event -- common/autotest_common.sh@10 -- # set +x 00:16:07.018 12:15:02 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:16:07.018 12:15:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:07.018 12:15:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:07.018 12:15:02 -- common/autotest_common.sh@10 -- # set +x 00:16:07.018 ************************************ 00:16:07.018 START TEST thread 00:16:07.018 ************************************ 00:16:07.018 12:15:02 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:16:07.018 * Looking for test storage... 
00:16:07.018 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:16:07.018 12:15:02 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:07.018 12:15:02 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:16:07.018 12:15:02 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:07.276 12:15:02 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:07.276 12:15:02 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:07.276 12:15:02 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:07.276 12:15:02 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:07.276 12:15:02 thread -- scripts/common.sh@336 -- # IFS=.-: 00:16:07.276 12:15:02 thread -- scripts/common.sh@336 -- # read -ra ver1 00:16:07.276 12:15:02 thread -- scripts/common.sh@337 -- # IFS=.-: 00:16:07.276 12:15:02 thread -- scripts/common.sh@337 -- # read -ra ver2 00:16:07.276 12:15:02 thread -- scripts/common.sh@338 -- # local 'op=<' 00:16:07.277 12:15:02 thread -- scripts/common.sh@340 -- # ver1_l=2 00:16:07.277 12:15:02 thread -- scripts/common.sh@341 -- # ver2_l=1 00:16:07.277 12:15:02 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:07.277 12:15:02 thread -- scripts/common.sh@344 -- # case "$op" in 00:16:07.277 12:15:02 thread -- scripts/common.sh@345 -- # : 1 00:16:07.277 12:15:02 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:07.277 12:15:02 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:07.277 12:15:02 thread -- scripts/common.sh@365 -- # decimal 1 00:16:07.277 12:15:02 thread -- scripts/common.sh@353 -- # local d=1 00:16:07.277 12:15:02 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:07.277 12:15:02 thread -- scripts/common.sh@355 -- # echo 1 00:16:07.277 12:15:02 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:16:07.277 12:15:02 thread -- scripts/common.sh@366 -- # decimal 2 00:16:07.277 12:15:02 thread -- scripts/common.sh@353 -- # local d=2 00:16:07.277 12:15:02 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:07.277 12:15:02 thread -- scripts/common.sh@355 -- # echo 2 00:16:07.277 12:15:02 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:16:07.277 12:15:02 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:07.277 12:15:02 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:07.277 12:15:02 thread -- scripts/common.sh@368 -- # return 0 00:16:07.277 12:15:02 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:07.277 12:15:02 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:07.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.277 --rc genhtml_branch_coverage=1 00:16:07.277 --rc genhtml_function_coverage=1 00:16:07.277 --rc genhtml_legend=1 00:16:07.277 --rc geninfo_all_blocks=1 00:16:07.277 --rc geninfo_unexecuted_blocks=1 00:16:07.277 00:16:07.277 ' 00:16:07.277 12:15:02 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:07.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.277 --rc genhtml_branch_coverage=1 00:16:07.277 --rc genhtml_function_coverage=1 00:16:07.277 --rc genhtml_legend=1 00:16:07.277 --rc geninfo_all_blocks=1 00:16:07.277 --rc geninfo_unexecuted_blocks=1 00:16:07.277 00:16:07.277 ' 00:16:07.277 12:15:02 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:07.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:16:07.277 --rc genhtml_branch_coverage=1 00:16:07.277 --rc genhtml_function_coverage=1 00:16:07.277 --rc genhtml_legend=1 00:16:07.277 --rc geninfo_all_blocks=1 00:16:07.277 --rc geninfo_unexecuted_blocks=1 00:16:07.277 00:16:07.277 ' 00:16:07.277 12:15:02 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:07.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:07.277 --rc genhtml_branch_coverage=1 00:16:07.277 --rc genhtml_function_coverage=1 00:16:07.277 --rc genhtml_legend=1 00:16:07.277 --rc geninfo_all_blocks=1 00:16:07.277 --rc geninfo_unexecuted_blocks=1 00:16:07.277 00:16:07.277 ' 00:16:07.277 12:15:02 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:16:07.277 12:15:02 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:16:07.277 12:15:02 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:07.277 12:15:02 thread -- common/autotest_common.sh@10 -- # set +x 00:16:07.277 ************************************ 00:16:07.277 START TEST thread_poller_perf 00:16:07.277 ************************************ 00:16:07.277 12:15:02 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:16:07.277 [2024-11-25 12:15:02.955337] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:16:07.277 [2024-11-25 12:15:02.955425] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60872 ] 00:16:07.277 [2024-11-25 12:15:03.091411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:07.277 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:16:07.277 [2024-11-25 12:15:03.122703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.663 [2024-11-25T12:15:04.539Z] ====================================== 00:16:08.663 [2024-11-25T12:15:04.539Z] busy:2610810076 (cyc) 00:16:08.663 [2024-11-25T12:15:04.539Z] total_run_count: 389000 00:16:08.663 [2024-11-25T12:15:04.539Z] tsc_hz: 2600000000 (cyc) 00:16:08.663 [2024-11-25T12:15:04.539Z] ====================================== 00:16:08.663 [2024-11-25T12:15:04.539Z] poller_cost: 6711 (cyc), 2581 (nsec) 00:16:08.663 00:16:08.663 real 0m1.216s 00:16:08.663 user 0m1.082s 00:16:08.663 sys 0m0.029s 00:16:08.663 ************************************ 00:16:08.663 END TEST thread_poller_perf 00:16:08.663 12:15:04 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:08.663 12:15:04 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:16:08.663 ************************************ 00:16:08.663 12:15:04 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:16:08.663 12:15:04 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:16:08.663 12:15:04 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:08.663 12:15:04 thread -- common/autotest_common.sh@10 -- # set +x 00:16:08.663 ************************************ 00:16:08.663 START TEST thread_poller_perf 00:16:08.663 ************************************ 00:16:08.663 12:15:04 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:16:08.663 [2024-11-25 12:15:04.214387] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:16:08.663 [2024-11-25 12:15:04.214722] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60902 ] 00:16:08.663 [2024-11-25 12:15:04.354609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:08.663 [2024-11-25 12:15:04.390630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.663 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:16:09.597 [2024-11-25T12:15:05.473Z] ====================================== 00:16:09.597 [2024-11-25T12:15:05.473Z] busy:2602119038 (cyc) 00:16:09.597 [2024-11-25T12:15:05.473Z] total_run_count: 4112000 00:16:09.597 [2024-11-25T12:15:05.473Z] tsc_hz: 2600000000 (cyc) 00:16:09.597 [2024-11-25T12:15:05.473Z] ====================================== 00:16:09.597 [2024-11-25T12:15:05.473Z] poller_cost: 632 (cyc), 243 (nsec) 00:16:09.597 00:16:09.597 real 0m1.223s 00:16:09.597 user 0m1.090s 00:16:09.597 sys 0m0.027s 00:16:09.597 12:15:05 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:09.597 ************************************ 00:16:09.597 END TEST thread_poller_perf 00:16:09.597 ************************************ 00:16:09.597 12:15:05 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:16:09.597 12:15:05 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:16:09.597 00:16:09.597 real 0m2.658s 00:16:09.597 user 0m2.276s 00:16:09.597 sys 0m0.169s 00:16:09.597 12:15:05 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:09.597 ************************************ 00:16:09.597 END TEST thread 00:16:09.597 ************************************ 00:16:09.597 12:15:05 thread -- common/autotest_common.sh@10 -- # set +x 00:16:09.855 12:15:05 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:16:09.855 12:15:05 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:16:09.855 12:15:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:09.855 12:15:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:09.855 12:15:05 -- common/autotest_common.sh@10 -- # set +x 00:16:09.855 ************************************ 00:16:09.855 START TEST app_cmdline 00:16:09.855 ************************************ 00:16:09.855 12:15:05 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:16:09.855 * Looking for test storage... 
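The poller_cost figures in the two reports above follow directly from the counters printed beside them: busy TSC cycles divided by total_run_count gives cycles per poller invocation, and dividing by the 2.6 GHz TSC rate converts that to nanoseconds. A sketch reproducing the report's integer math from the logged numbers:

    tsc_hz=2600000000                                # as reported by poller_perf
    for run in "2610810076 389000" "2602119038 4112000"; do
        set -- $run
        cyc=$(( $1 / $2 ))                           # busy cycles per poller call
        nsec=$(( cyc * 1000000000 / tsc_hz ))        # cycles -> nanoseconds
        echo "poller_cost: $cyc (cyc), $nsec (nsec)"
    done

This prints 6711 (cyc), 2581 (nsec) for the 1-microsecond timed-poller run and 632 (cyc), 243 (nsec) for the 0-period run, matching the tables above: per call, the timed pollers in this run cost roughly ten times as much as the untimed ones.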
00:16:09.855 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:16:09.855 12:15:05 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:09.855 12:15:05 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:16:09.855 12:15:05 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:09.855 12:15:05 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:09.855 12:15:05 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:09.855 12:15:05 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:09.855 12:15:05 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:09.855 12:15:05 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:16:09.855 12:15:05 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:16:09.855 12:15:05 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:16:09.855 12:15:05 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:16:09.855 12:15:05 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:16:09.855 12:15:05 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:16:09.855 12:15:05 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:16:09.855 12:15:05 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:09.855 12:15:05 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:16:09.855 12:15:05 app_cmdline -- scripts/common.sh@345 -- # : 1 00:16:09.855 12:15:05 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:09.855 12:15:05 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:09.855 12:15:05 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:16:09.855 12:15:05 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:16:09.855 12:15:05 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:09.855 12:15:05 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:16:09.855 12:15:05 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:16:09.855 12:15:05 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:16:09.855 12:15:05 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:16:09.855 12:15:05 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:09.855 12:15:05 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:16:09.855 12:15:05 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:16:09.855 12:15:05 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:09.855 12:15:05 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:09.855 12:15:05 app_cmdline -- scripts/common.sh@368 -- # return 0 00:16:09.855 12:15:05 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:09.855 12:15:05 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:09.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.855 --rc genhtml_branch_coverage=1 00:16:09.855 --rc genhtml_function_coverage=1 00:16:09.855 --rc genhtml_legend=1 00:16:09.855 --rc geninfo_all_blocks=1 00:16:09.855 --rc geninfo_unexecuted_blocks=1 00:16:09.855 00:16:09.855 ' 00:16:09.855 12:15:05 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:09.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.855 --rc genhtml_branch_coverage=1 00:16:09.855 --rc genhtml_function_coverage=1 00:16:09.855 --rc genhtml_legend=1 00:16:09.855 --rc geninfo_all_blocks=1 00:16:09.855 --rc geninfo_unexecuted_blocks=1 00:16:09.855 
00:16:09.855 ' 00:16:09.855 12:15:05 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:09.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.855 --rc genhtml_branch_coverage=1 00:16:09.855 --rc genhtml_function_coverage=1 00:16:09.855 --rc genhtml_legend=1 00:16:09.855 --rc geninfo_all_blocks=1 00:16:09.855 --rc geninfo_unexecuted_blocks=1 00:16:09.855 00:16:09.855 ' 00:16:09.855 12:15:05 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:09.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.855 --rc genhtml_branch_coverage=1 00:16:09.855 --rc genhtml_function_coverage=1 00:16:09.855 --rc genhtml_legend=1 00:16:09.855 --rc geninfo_all_blocks=1 00:16:09.855 --rc geninfo_unexecuted_blocks=1 00:16:09.855 00:16:09.855 ' 00:16:09.855 12:15:05 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:16:09.855 12:15:05 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60990 00:16:09.855 12:15:05 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60990 00:16:09.855 12:15:05 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 60990 ']' 00:16:09.855 12:15:05 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:09.855 12:15:05 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:09.855 12:15:05 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:09.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:09.855 12:15:05 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:09.855 12:15:05 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:16:09.855 12:15:05 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:16:09.856 [2024-11-25 12:15:05.696615] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:16:09.856 [2024-11-25 12:15:05.696682] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60990 ] 00:16:10.113 [2024-11-25 12:15:05.837327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:10.113 [2024-11-25 12:15:05.873760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:10.679 12:15:06 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:10.679 12:15:06 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:16:10.679 12:15:06 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:16:10.936 { 00:16:10.936 "fields": { 00:16:10.936 "commit": "f1dd81af3", 00:16:10.936 "major": 25, 00:16:10.936 "minor": 1, 00:16:10.936 "patch": 0, 00:16:10.936 "suffix": "-pre" 00:16:10.936 }, 00:16:10.936 "version": "SPDK v25.01-pre git sha1 f1dd81af3" 00:16:10.936 } 00:16:10.936 12:15:06 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:16:10.936 12:15:06 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:16:10.936 12:15:06 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:16:10.936 12:15:06 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:16:10.936 12:15:06 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:16:10.936 12:15:06 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.936 12:15:06 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:16:10.936 12:15:06 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:16:10.936 12:15:06 app_cmdline -- app/cmdline.sh@26 -- # sort 00:16:10.936 12:15:06 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.936 12:15:06 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:16:10.936 12:15:06 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:16:10.936 12:15:06 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:16:10.936 12:15:06 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:16:10.936 12:15:06 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:16:10.936 12:15:06 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:10.936 12:15:06 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:10.936 12:15:06 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:10.936 12:15:06 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:10.936 12:15:06 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:10.936 12:15:06 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:10.936 12:15:06 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:10.936 12:15:06 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:10.936 12:15:06 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:16:11.194 2024/11/25 12:15:06 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:16:11.194 request: 00:16:11.194 { 00:16:11.194 "method": "env_dpdk_get_mem_stats", 00:16:11.194 "params": {} 00:16:11.194 } 00:16:11.194 Got JSON-RPC error response 00:16:11.194 GoRPCClient: error on JSON-RPC call 00:16:11.194 12:15:06 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:16:11.194 12:15:06 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:11.194 12:15:06 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:11.194 12:15:06 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:11.194 12:15:06 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60990 00:16:11.194 12:15:06 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 60990 ']' 00:16:11.194 12:15:06 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 60990 00:16:11.194 12:15:06 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:16:11.194 12:15:06 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:11.194 12:15:06 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60990 00:16:11.194 12:15:07 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:11.194 12:15:07 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:11.194 killing process with pid 60990 00:16:11.194 12:15:07 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60990' 00:16:11.194 12:15:07 app_cmdline -- common/autotest_common.sh@973 -- # kill 60990 00:16:11.194 12:15:07 app_cmdline -- common/autotest_common.sh@978 -- # wait 60990 00:16:11.452 00:16:11.452 real 0m1.687s 00:16:11.452 user 0m2.092s 00:16:11.452 sys 0m0.335s 00:16:11.452 12:15:07 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:11.452 ************************************ 00:16:11.452 END TEST app_cmdline 00:16:11.452 ************************************ 00:16:11.452 12:15:07 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:16:11.452 12:15:07 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:16:11.452 12:15:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:11.452 12:15:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:11.452 12:15:07 -- common/autotest_common.sh@10 -- # set +x 00:16:11.452 ************************************ 00:16:11.452 START TEST version 00:16:11.452 ************************************ 00:16:11.452 12:15:07 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:16:11.452 * Looking for test storage... 
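The Code=-32601 failure above is the point of the cmdline test: spdk_tgt was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so those two methods answer normally while anything else, env_dpdk_get_mem_stats included, must come back as Method not found. The same behaviour can be checked by hand against the socket this run uses (a sketch; the paths are the ones shown in the trace):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk.sock

    "$RPC" -s "$SOCK" rpc_get_methods        # exactly the two allowlisted methods
    "$RPC" -s "$SOCK" spdk_get_version       # allowed: returns the version object
    if ! "$RPC" -s "$SOCK" env_dpdk_get_mem_stats 2>/dev/null; then
        echo "rejected as expected (JSON-RPC error -32601, Method not found)"
    fi

The NOT wrapper in the trace encodes the same expectation: the test only passes because the disallowed call fails.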
00:16:11.452 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:16:11.452 12:15:07 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:11.452 12:15:07 version -- common/autotest_common.sh@1693 -- # lcov --version 00:16:11.452 12:15:07 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:11.710 12:15:07 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:11.710 12:15:07 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:11.710 12:15:07 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:11.710 12:15:07 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:11.710 12:15:07 version -- scripts/common.sh@336 -- # IFS=.-: 00:16:11.710 12:15:07 version -- scripts/common.sh@336 -- # read -ra ver1 00:16:11.710 12:15:07 version -- scripts/common.sh@337 -- # IFS=.-: 00:16:11.710 12:15:07 version -- scripts/common.sh@337 -- # read -ra ver2 00:16:11.710 12:15:07 version -- scripts/common.sh@338 -- # local 'op=<' 00:16:11.710 12:15:07 version -- scripts/common.sh@340 -- # ver1_l=2 00:16:11.710 12:15:07 version -- scripts/common.sh@341 -- # ver2_l=1 00:16:11.710 12:15:07 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:11.710 12:15:07 version -- scripts/common.sh@344 -- # case "$op" in 00:16:11.710 12:15:07 version -- scripts/common.sh@345 -- # : 1 00:16:11.710 12:15:07 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:11.710 12:15:07 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:11.710 12:15:07 version -- scripts/common.sh@365 -- # decimal 1 00:16:11.710 12:15:07 version -- scripts/common.sh@353 -- # local d=1 00:16:11.710 12:15:07 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:11.710 12:15:07 version -- scripts/common.sh@355 -- # echo 1 00:16:11.710 12:15:07 version -- scripts/common.sh@365 -- # ver1[v]=1 00:16:11.710 12:15:07 version -- scripts/common.sh@366 -- # decimal 2 00:16:11.710 12:15:07 version -- scripts/common.sh@353 -- # local d=2 00:16:11.710 12:15:07 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:11.710 12:15:07 version -- scripts/common.sh@355 -- # echo 2 00:16:11.710 12:15:07 version -- scripts/common.sh@366 -- # ver2[v]=2 00:16:11.710 12:15:07 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:11.710 12:15:07 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:11.710 12:15:07 version -- scripts/common.sh@368 -- # return 0 00:16:11.710 12:15:07 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:11.710 12:15:07 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:11.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.710 --rc genhtml_branch_coverage=1 00:16:11.710 --rc genhtml_function_coverage=1 00:16:11.710 --rc genhtml_legend=1 00:16:11.710 --rc geninfo_all_blocks=1 00:16:11.710 --rc geninfo_unexecuted_blocks=1 00:16:11.710 00:16:11.710 ' 00:16:11.710 12:15:07 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:11.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.710 --rc genhtml_branch_coverage=1 00:16:11.710 --rc genhtml_function_coverage=1 00:16:11.710 --rc genhtml_legend=1 00:16:11.710 --rc geninfo_all_blocks=1 00:16:11.710 --rc geninfo_unexecuted_blocks=1 00:16:11.710 00:16:11.710 ' 00:16:11.710 12:15:07 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:11.710 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:16:11.710 --rc genhtml_branch_coverage=1 00:16:11.710 --rc genhtml_function_coverage=1 00:16:11.710 --rc genhtml_legend=1 00:16:11.710 --rc geninfo_all_blocks=1 00:16:11.710 --rc geninfo_unexecuted_blocks=1 00:16:11.710 00:16:11.710 ' 00:16:11.710 12:15:07 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:11.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.710 --rc genhtml_branch_coverage=1 00:16:11.710 --rc genhtml_function_coverage=1 00:16:11.710 --rc genhtml_legend=1 00:16:11.710 --rc geninfo_all_blocks=1 00:16:11.710 --rc geninfo_unexecuted_blocks=1 00:16:11.710 00:16:11.710 ' 00:16:11.710 12:15:07 version -- app/version.sh@17 -- # get_header_version major 00:16:11.710 12:15:07 version -- app/version.sh@14 -- # cut -f2 00:16:11.710 12:15:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:16:11.710 12:15:07 version -- app/version.sh@14 -- # tr -d '"' 00:16:11.710 12:15:07 version -- app/version.sh@17 -- # major=25 00:16:11.710 12:15:07 version -- app/version.sh@18 -- # get_header_version minor 00:16:11.710 12:15:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:16:11.710 12:15:07 version -- app/version.sh@14 -- # tr -d '"' 00:16:11.710 12:15:07 version -- app/version.sh@14 -- # cut -f2 00:16:11.710 12:15:07 version -- app/version.sh@18 -- # minor=1 00:16:11.710 12:15:07 version -- app/version.sh@19 -- # get_header_version patch 00:16:11.710 12:15:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:16:11.710 12:15:07 version -- app/version.sh@14 -- # cut -f2 00:16:11.710 12:15:07 version -- app/version.sh@14 -- # tr -d '"' 00:16:11.710 12:15:07 version -- app/version.sh@19 -- # patch=0 00:16:11.710 12:15:07 version -- app/version.sh@20 -- # get_header_version suffix 00:16:11.710 12:15:07 version -- app/version.sh@14 -- # cut -f2 00:16:11.710 12:15:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:16:11.710 12:15:07 version -- app/version.sh@14 -- # tr -d '"' 00:16:11.710 12:15:07 version -- app/version.sh@20 -- # suffix=-pre 00:16:11.710 12:15:07 version -- app/version.sh@22 -- # version=25.1 00:16:11.710 12:15:07 version -- app/version.sh@25 -- # (( patch != 0 )) 00:16:11.710 12:15:07 version -- app/version.sh@28 -- # version=25.1rc0 00:16:11.710 12:15:07 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:16:11.710 12:15:07 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:16:11.710 12:15:07 version -- app/version.sh@30 -- # py_version=25.1rc0 00:16:11.710 12:15:07 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:16:11.710 00:16:11.710 real 0m0.197s 00:16:11.710 user 0m0.117s 00:16:11.710 sys 0m0.106s 00:16:11.710 12:15:07 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:11.710 12:15:07 version -- common/autotest_common.sh@10 -- # set +x 00:16:11.710 ************************************ 00:16:11.710 END TEST version 00:16:11.710 ************************************ 00:16:11.710 12:15:07 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:16:11.710 12:15:07 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:16:11.710 12:15:07 -- spdk/autotest.sh@194 -- # uname -s 00:16:11.710 12:15:07 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:16:11.710 12:15:07 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:16:11.710 12:15:07 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:16:11.710 12:15:07 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:16:11.710 12:15:07 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:16:11.710 12:15:07 -- spdk/autotest.sh@260 -- # timing_exit lib 00:16:11.710 12:15:07 -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:11.710 12:15:07 -- common/autotest_common.sh@10 -- # set +x 00:16:11.710 12:15:07 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:16:11.710 12:15:07 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:16:11.710 12:15:07 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:16:11.710 12:15:07 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:16:11.710 12:15:07 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:16:11.711 12:15:07 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:16:11.711 12:15:07 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:16:11.711 12:15:07 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:11.711 12:15:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:11.711 12:15:07 -- common/autotest_common.sh@10 -- # set +x 00:16:11.711 ************************************ 00:16:11.711 START TEST nvmf_tcp 00:16:11.711 ************************************ 00:16:11.711 12:15:07 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:16:11.969 * Looking for test storage... 00:16:11.969 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:16:11.969 12:15:07 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:11.969 12:15:07 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:16:11.969 12:15:07 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:11.969 12:15:07 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:11.969 12:15:07 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:11.969 12:15:07 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:11.969 12:15:07 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:11.969 12:15:07 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:16:11.969 12:15:07 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:16:11.969 12:15:07 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:16:11.969 12:15:07 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:16:11.969 12:15:07 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:16:11.969 12:15:07 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:16:11.969 12:15:07 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:16:11.969 12:15:07 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:11.969 12:15:07 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:16:11.969 12:15:07 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:16:11.969 12:15:07 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:11.969 12:15:07 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:11.969 12:15:07 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:16:11.969 12:15:07 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:16:11.969 12:15:07 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:11.969 12:15:07 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:16:11.969 12:15:07 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:16:11.969 12:15:07 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:16:11.969 12:15:07 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:16:11.969 12:15:07 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:11.969 12:15:07 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:16:11.969 12:15:07 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:16:11.969 12:15:07 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:11.969 12:15:07 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:11.969 12:15:07 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:16:11.969 12:15:07 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:11.969 12:15:07 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:11.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.969 --rc genhtml_branch_coverage=1 00:16:11.969 --rc genhtml_function_coverage=1 00:16:11.969 --rc genhtml_legend=1 00:16:11.969 --rc geninfo_all_blocks=1 00:16:11.969 --rc geninfo_unexecuted_blocks=1 00:16:11.969 00:16:11.969 ' 00:16:11.969 12:15:07 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:11.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.969 --rc genhtml_branch_coverage=1 00:16:11.969 --rc genhtml_function_coverage=1 00:16:11.969 --rc genhtml_legend=1 00:16:11.969 --rc geninfo_all_blocks=1 00:16:11.969 --rc geninfo_unexecuted_blocks=1 00:16:11.969 00:16:11.969 ' 00:16:11.969 12:15:07 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:11.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.969 --rc genhtml_branch_coverage=1 00:16:11.969 --rc genhtml_function_coverage=1 00:16:11.969 --rc genhtml_legend=1 00:16:11.969 --rc geninfo_all_blocks=1 00:16:11.969 --rc geninfo_unexecuted_blocks=1 00:16:11.969 00:16:11.969 ' 00:16:11.969 12:15:07 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:11.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.969 --rc genhtml_branch_coverage=1 00:16:11.969 --rc genhtml_function_coverage=1 00:16:11.969 --rc genhtml_legend=1 00:16:11.969 --rc geninfo_all_blocks=1 00:16:11.969 --rc geninfo_unexecuted_blocks=1 00:16:11.969 00:16:11.969 ' 00:16:11.969 12:15:07 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:16:11.969 12:15:07 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:16:11.969 12:15:07 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:16:11.969 12:15:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:11.969 12:15:07 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:11.969 12:15:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:11.969 ************************************ 00:16:11.969 START TEST nvmf_target_core 00:16:11.969 ************************************ 00:16:11.969 12:15:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:16:11.969 * Looking for test storage... 00:16:11.969 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:16:11.969 12:15:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:11.969 12:15:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:16:11.969 12:15:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:11.969 12:15:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:11.969 12:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:11.969 12:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:11.969 12:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:11.969 12:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:16:11.969 12:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:16:11.969 12:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:16:11.969 12:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:16:11.969 12:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:16:11.969 12:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:16:11.969 12:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:16:11.969 12:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:11.969 12:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:16:11.969 12:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:16:11.969 12:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:11.969 12:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:11.969 12:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:16:11.969 12:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:16:11.969 12:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:11.969 12:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:16:11.969 12:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:16:11.969 12:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:16:11.969 12:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:16:11.969 12:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:11.969 12:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:16:11.969 12:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:16:11.969 12:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:11.969 12:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:11.969 12:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:16:11.969 12:15:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:11.969 12:15:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:11.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.969 --rc genhtml_branch_coverage=1 00:16:11.969 --rc genhtml_function_coverage=1 00:16:11.969 --rc genhtml_legend=1 00:16:11.969 --rc geninfo_all_blocks=1 00:16:11.969 --rc geninfo_unexecuted_blocks=1 00:16:11.969 00:16:11.969 ' 00:16:11.969 12:15:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:11.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.969 --rc genhtml_branch_coverage=1 00:16:11.969 --rc genhtml_function_coverage=1 00:16:11.969 --rc genhtml_legend=1 00:16:11.969 --rc geninfo_all_blocks=1 00:16:11.969 --rc geninfo_unexecuted_blocks=1 00:16:11.969 00:16:11.969 ' 00:16:11.969 12:15:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:11.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.969 --rc genhtml_branch_coverage=1 00:16:11.969 --rc genhtml_function_coverage=1 00:16:11.969 --rc genhtml_legend=1 00:16:11.969 --rc geninfo_all_blocks=1 00:16:11.969 --rc geninfo_unexecuted_blocks=1 00:16:11.969 00:16:11.969 ' 00:16:11.969 12:15:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:11.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:11.969 --rc genhtml_branch_coverage=1 00:16:11.969 --rc genhtml_function_coverage=1 00:16:11.969 --rc genhtml_legend=1 00:16:11.969 --rc geninfo_all_blocks=1 00:16:11.969 --rc geninfo_unexecuted_blocks=1 00:16:11.969 00:16:11.969 ' 00:16:12.228 12:15:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:16:12.228 12:15:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:16:12.228 12:15:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:12.228 12:15:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:16:12.228 12:15:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:12.228 12:15:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:12.228 12:15:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:12.228 12:15:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:12.228 12:15:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:12.228 12:15:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:12.228 12:15:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:12.228 12:15:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:12.228 12:15:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:12.228 12:15:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:12.228 12:15:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:16:12.228 12:15:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:12.229 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:16:12.229 ************************************ 00:16:12.229 START TEST nvmf_abort 00:16:12.229 ************************************ 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:16:12.229 * Looking for test storage... 
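The "[: : integer expression expected" message above (and again below, each time test/nvmf/common.sh is sourced) is real but harmless here: line 33 expands to '[' '' -eq 1 ']', and test's -eq demands integers on both sides, so it prints the error and evaluates false, which happens to be the branch this configuration wants anyway. The usual defensive spellings avoid the noise (a sketch; FLAG stands in for whatever variable line 33 reads, left unset in this run's config):

    [ '' -eq 1 ] && echo enabled                 # -> [: : integer expression expected

    [ "${FLAG:-0}" -eq 1 ] && echo enabled       # default an empty value to 0
    [[ $FLAG == 1 ]] && echo enabled             # string compare, never errors

Either form keeps the false result without the per-source complaint.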
00:16:12.229 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:16:12.229 12:15:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:12.229 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:12.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.229 --rc genhtml_branch_coverage=1 00:16:12.229 --rc genhtml_function_coverage=1 00:16:12.229 --rc genhtml_legend=1 00:16:12.229 --rc geninfo_all_blocks=1 00:16:12.229 --rc geninfo_unexecuted_blocks=1 00:16:12.229 00:16:12.229 ' 00:16:12.229 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:12.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.229 --rc genhtml_branch_coverage=1 00:16:12.229 --rc genhtml_function_coverage=1 00:16:12.229 --rc genhtml_legend=1 00:16:12.229 --rc geninfo_all_blocks=1 00:16:12.229 --rc geninfo_unexecuted_blocks=1 00:16:12.229 00:16:12.229 ' 00:16:12.229 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:12.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.229 --rc genhtml_branch_coverage=1 00:16:12.230 --rc genhtml_function_coverage=1 00:16:12.230 --rc genhtml_legend=1 00:16:12.230 --rc geninfo_all_blocks=1 00:16:12.230 --rc geninfo_unexecuted_blocks=1 00:16:12.230 00:16:12.230 ' 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:12.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:12.230 --rc genhtml_branch_coverage=1 00:16:12.230 --rc genhtml_function_coverage=1 00:16:12.230 --rc genhtml_legend=1 00:16:12.230 --rc geninfo_all_blocks=1 00:16:12.230 --rc geninfo_unexecuted_blocks=1 00:16:12.230 00:16:12.230 ' 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
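With NET_TYPE=virt, the nvmftestinit sequence a little further down fabricates the whole NVMe-oF network in software: nvmf_veth_init creates veth pairs for two initiator and two target interfaces, moves the target ends into the nvmf_tgt_ns_spdk namespace, and joins all of the peer ends to the nvmf_br bridge (the "Cannot find device" lines just ahead of the setup are the cleanup pass deleting interfaces that do not exist yet). A condensed sketch of one initiator/target pair, using the names and addresses this run assigns:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

The target then listens on 10.0.0.3:4420 (NVMF_FIRST_TARGET_IP and NVMF_PORT above) while the initiator reaches it from 10.0.0.1 across the bridge; the iptables ACCEPT rule for port 4420 on the initiator interface, added at the end of the trace, keeps the firewall out of the way.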
00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:12.230 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:16:12.230 
12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:12.230 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:12.231 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:12.231 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:12.231 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:12.231 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:12.231 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:12.231 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:12.231 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:12.231 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # ip link set 
nvmf_init_br nomaster 00:16:12.231 Cannot find device "nvmf_init_br" 00:16:12.231 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # true 00:16:12.231 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:12.231 Cannot find device "nvmf_init_br2" 00:16:12.231 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # true 00:16:12.231 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:12.231 Cannot find device "nvmf_tgt_br" 00:16:12.231 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@164 -- # true 00:16:12.231 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:12.231 Cannot find device "nvmf_tgt_br2" 00:16:12.231 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@165 -- # true 00:16:12.231 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:12.231 Cannot find device "nvmf_init_br" 00:16:12.231 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@166 -- # true 00:16:12.231 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:12.231 Cannot find device "nvmf_init_br2" 00:16:12.231 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@167 -- # true 00:16:12.231 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:12.231 Cannot find device "nvmf_tgt_br" 00:16:12.231 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@168 -- # true 00:16:12.231 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:12.489 Cannot find device "nvmf_tgt_br2" 00:16:12.489 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # true 00:16:12.489 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:12.489 Cannot find device "nvmf_br" 00:16:12.489 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@170 -- # true 00:16:12.489 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:12.489 Cannot find device "nvmf_init_if" 00:16:12.489 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@171 -- # true 00:16:12.489 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:12.489 Cannot find device "nvmf_init_if2" 00:16:12.489 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@172 -- # true 00:16:12.489 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:12.489 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:12.489 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@173 -- # true 00:16:12.489 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:12.489 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:12.489 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@174 -- # true 00:16:12.489 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:12.489 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:12.489 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:12.489 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:12.489 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:12.489 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:12.489 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:12.489 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:12.489 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:12.489 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:12.489 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:12.489 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:12.489 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:12.489 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:12.489 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:12.489 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:12.489 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:12.489 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:12.489 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:12.489 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:12.489 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:12.489 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:12.489 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:12.489 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:12.489 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:12.489 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:12.489 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:12.489 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p 
tcp --dport 4420 -j ACCEPT' 00:16:12.747 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:12.747 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:12.747 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:12.747 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:12.748 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:12.748 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:12.748 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:16:12.748 00:16:12.748 --- 10.0.0.3 ping statistics --- 00:16:12.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:12.748 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:16:12.748 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:12.748 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:12.748 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.036 ms 00:16:12.748 00:16:12.748 --- 10.0.0.4 ping statistics --- 00:16:12.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:12.748 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:16:12.748 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:12.748 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:12.748 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:16:12.748 00:16:12.748 --- 10.0.0.1 ping statistics --- 00:16:12.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:12.748 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:16:12.748 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:12.748 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:12.748 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms 00:16:12.748 00:16:12.748 --- 10.0.0.2 ping statistics --- 00:16:12.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:12.748 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:16:12.748 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:12.748 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@461 -- # return 0 00:16:12.748 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:12.748 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:12.748 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:12.748 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:12.748 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:12.748 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:12.748 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:12.748 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:16:12.748 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:12.748 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:12.748 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:16:12.748 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=61419 00:16:12.748 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 61419 00:16:12.748 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:12.748 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 61419 ']' 00:16:12.748 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:12.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:12.748 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:12.748 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:12.748 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:12.748 12:15:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:16:12.748 [2024-11-25 12:15:08.469647] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:16:12.748 [2024-11-25 12:15:08.470014] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:12.748 [2024-11-25 12:15:08.611281] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:13.006 [2024-11-25 12:15:08.648780] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:13.006 [2024-11-25 12:15:08.648820] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:13.006 [2024-11-25 12:15:08.648827] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:13.006 [2024-11-25 12:15:08.648831] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:13.006 [2024-11-25 12:15:08.648836] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:13.006 [2024-11-25 12:15:08.649561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:13.006 [2024-11-25 12:15:08.650221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:13.006 [2024-11-25 12:15:08.650340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:13.569 12:15:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:13.569 12:15:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:16:13.569 12:15:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:13.569 12:15:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:13.569 12:15:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:16:13.569 12:15:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:13.569 12:15:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:16:13.569 12:15:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.569 12:15:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:16:13.569 [2024-11-25 12:15:09.385051] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:13.569 12:15:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.569 12:15:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:16:13.569 12:15:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.569 12:15:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:16:13.569 Malloc0 00:16:13.569 12:15:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.569 12:15:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:13.569 12:15:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.569 12:15:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:16:13.569 
Delay0 00:16:13.569 12:15:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.569 12:15:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:13.569 12:15:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.569 12:15:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:16:13.569 12:15:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.569 12:15:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:16:13.569 12:15:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.569 12:15:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:16:13.569 12:15:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.569 12:15:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:16:13.569 12:15:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.569 12:15:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:16:13.569 [2024-11-25 12:15:09.443478] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:13.827 12:15:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.827 12:15:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:16:13.827 12:15:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.827 12:15:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:16:13.827 12:15:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.827 12:15:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:16:13.827 [2024-11-25 12:15:09.617616] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:16.363 Initializing NVMe Controllers 00:16:16.363 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:16:16.363 controller IO queue size 128 less than required 00:16:16.363 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:16:16.363 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:16:16.363 Initialization complete. Launching workers. 
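Before reading the abort statistics just below, it may help to see the provisioning that the rpc_cmd traces above performed, condensed into plain rpc.py calls. This is a reconstruction from this log, not a quote from abort.sh; rpc_cmd is the harness wrapper around scripts/rpc.py, and the socket path is assumed to be rpc.py's default (/var/tmp/spdk.sock).

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
    $rpc bdev_malloc_create 64 4096 -b Malloc0      # 64 MiB, 4096-byte blocks (MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE above)
    $rpc bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000 # large artificial latencies so I/O sits queued long enough to abort
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

The abort example then connects to 10.0.0.3:4420 with queue depth 128 (-q 128) and issues aborts against the in-flight I/O, producing the NS/CTRLR tallies that follow.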
00:16:16.363 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 31247 00:16:16.364 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 31308, failed to submit 62 00:16:16.364 success 31251, unsuccessful 57, failed 0 00:16:16.364 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:16.364 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.364 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:16:16.364 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.364 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:16:16.364 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:16:16.364 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:16.364 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:16:16.364 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:16.364 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:16:16.364 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:16.364 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:16.364 rmmod nvme_tcp 00:16:16.364 rmmod nvme_fabrics 00:16:16.364 rmmod nvme_keyring 00:16:16.364 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:16.364 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:16:16.364 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:16:16.364 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 61419 ']' 00:16:16.364 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 61419 00:16:16.364 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 61419 ']' 00:16:16.364 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 61419 00:16:16.364 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:16:16.364 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:16.364 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61419 00:16:16.364 killing process with pid 61419 00:16:16.364 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:16.364 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:16.364 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61419' 00:16:16.364 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 61419 00:16:16.364 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 61419 00:16:16.364 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:16.364 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:16.364 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:16.364 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:16:16.364 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:16:16.364 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:16.364 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:16:16.364 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:16.364 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:16.364 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:16.364 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:16.364 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:16.364 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:16.364 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:16.364 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:16.364 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:16.364 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:16.364 12:15:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:16.364 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:16.364 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:16.364 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:16.364 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:16.364 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:16.364 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:16.364 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:16.364 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:16.364 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@300 -- # return 0 00:16:16.364 00:16:16.364 real 0m4.292s 00:16:16.364 user 0m11.899s 00:16:16.364 sys 0m0.840s 00:16:16.364 ************************************ 00:16:16.364 END TEST nvmf_abort 00:16:16.364 ************************************ 00:16:16.364 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:16.364 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:16:16.364 12:15:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:16:16.364 12:15:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:16.364 12:15:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:16.364 12:15:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:16:16.364 ************************************ 00:16:16.364 START TEST nvmf_ns_hotplug_stress 00:16:16.364 ************************************ 00:16:16.364 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:16:16.624 * Looking for test storage... 00:16:16.624 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:16.624 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:16.624 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:16:16.624 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:16.624 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:16.624 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:16.624 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:16.624 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:16.624 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:16:16.624 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:16:16.624 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:16:16.624 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:16:16.624 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:16:16.624 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:16:16.624 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:16:16.624 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:16.624 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:16:16.624 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:16:16.624 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:16.624 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:16.624 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:16:16.624 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:16:16.624 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:16.624 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:16:16.624 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:16:16.624 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:16:16.624 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:16:16.624 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:16.624 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:16:16.625 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:16:16.625 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:16.625 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:16.625 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:16:16.625 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:16.625 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:16.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.625 --rc genhtml_branch_coverage=1 00:16:16.625 --rc genhtml_function_coverage=1 00:16:16.625 --rc genhtml_legend=1 00:16:16.625 --rc geninfo_all_blocks=1 00:16:16.625 --rc geninfo_unexecuted_blocks=1 00:16:16.625 00:16:16.625 ' 00:16:16.625 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:16.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.625 --rc genhtml_branch_coverage=1 00:16:16.625 --rc genhtml_function_coverage=1 00:16:16.625 --rc genhtml_legend=1 00:16:16.625 --rc geninfo_all_blocks=1 00:16:16.625 --rc geninfo_unexecuted_blocks=1 00:16:16.625 00:16:16.625 ' 00:16:16.625 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:16.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.625 --rc genhtml_branch_coverage=1 00:16:16.625 --rc genhtml_function_coverage=1 00:16:16.625 --rc genhtml_legend=1 00:16:16.625 --rc geninfo_all_blocks=1 00:16:16.625 --rc geninfo_unexecuted_blocks=1 00:16:16.625 00:16:16.625 ' 00:16:16.625 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:16.625 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:16.625 --rc genhtml_branch_coverage=1 00:16:16.625 --rc genhtml_function_coverage=1 00:16:16.625 --rc genhtml_legend=1 00:16:16.625 --rc geninfo_all_blocks=1 00:16:16.625 --rc geninfo_unexecuted_blocks=1 00:16:16.625 00:16:16.625 ' 00:16:16.625 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:16.625 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:16:16.625 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:16.625 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:16.625 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:16.625 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:16.625 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:16.625 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:16.625 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:16.625 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:16.625 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:16.625 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:16.625 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:16:16.625 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:16:16.625 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:16.625 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:16.625 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:16.625 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:16.625 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:16.625 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:16:16.625 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:16.625 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:16.625 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:16.625 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.625 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.625 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.625 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:16:16.625 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.625 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:16:16.625 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:16.625 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:16.625 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:16.625 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:16.625 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:16.625 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:16.625 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:16.625 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:16.625 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:16.625 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:16.625 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:16.625 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:16:16.626 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:16.626 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:16.626 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:16.626 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:16.626 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:16.626 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:16.626 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:16.626 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:16.626 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:16.626 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:16.626 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:16.626 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:16.626 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:16.626 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:16.626 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:16.626 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:16.626 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:16.626 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:16.626 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:16.626 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:16.626 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:16.626 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:16.626 12:15:12 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:16.626 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:16.626 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:16.626 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:16.626 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:16.626 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:16.626 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:16.626 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:16.626 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:16.626 Cannot find device "nvmf_init_br" 00:16:16.626 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:16:16.626 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:16.626 Cannot find device "nvmf_init_br2" 00:16:16.626 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:16:16.626 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:16.626 Cannot find device "nvmf_tgt_br" 00:16:16.626 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # true 00:16:16.626 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:16.626 Cannot find device "nvmf_tgt_br2" 00:16:16.626 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # true 00:16:16.626 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:16.626 Cannot find device "nvmf_init_br" 00:16:16.626 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # true 00:16:16.626 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:16.626 Cannot find device "nvmf_init_br2" 00:16:16.626 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # true 00:16:16.626 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:16.626 Cannot find device "nvmf_tgt_br" 00:16:16.626 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # true 00:16:16.626 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:16.626 Cannot find device "nvmf_tgt_br2" 00:16:16.626 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # true 00:16:16.626 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:16.626 Cannot find device "nvmf_br" 00:16:16.626 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@170 -- # true 00:16:16.626 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:16.884 Cannot find device "nvmf_init_if" 00:16:16.884 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # true 00:16:16.884 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:16.884 Cannot find device "nvmf_init_if2" 00:16:16.885 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # true 00:16:16.885 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:16.885 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:16.885 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # true 00:16:16.885 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:16.885 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:16.885 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # true 00:16:16.885 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:16.885 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:16.885 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:16.885 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:16.885 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:16.885 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:16.885 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:16.885 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:16.885 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:16.885 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:16.885 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:16.885 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:16.885 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:16.885 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:16.885 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:16.885 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@200 -- # ip 
link set nvmf_tgt_br up 00:16:16.885 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:16.885 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:16.885 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:16.885 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:16.885 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:16.885 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:16.885 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:16.885 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:16.885 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:16.885 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:16.885 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:16.885 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:16.885 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:16.885 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:16.885 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:16.885 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:16.885 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:16.885 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:16.885 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:16:16.885 00:16:16.885 --- 10.0.0.3 ping statistics --- 00:16:16.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:16.885 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:16:16.885 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:16.885 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:16:16.885 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:16:16.885 00:16:16.885 --- 10.0.0.4 ping statistics --- 00:16:16.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:16.885 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:16:16.885 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:16.885 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:16.885 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:16:16.885 00:16:16.885 --- 10.0.0.1 ping statistics --- 00:16:16.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:16.885 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:16:16.885 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:16.885 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:16.885 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:16:16.885 00:16:16.885 --- 10.0.0.2 ping statistics --- 00:16:16.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:16.885 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:16:16.885 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:16.885 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@461 -- # return 0 00:16:16.885 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:16.885 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:16.885 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:16.885 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:16.885 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:16.885 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:16.885 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:17.223 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:16:17.223 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:17.223 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:17.223 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:16:17.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
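For readers skimming the nvmf_veth_init traces above (this is the second time the same topology has been built in this log, once per test), the sequence reduces to roughly the following sketch, reconstructed from the logged commands rather than quoted from common.sh; the second initiator/target pair (nvmf_init_if2/nvmf_tgt_if2 at 10.0.0.2 and 10.0.0.4) is created the same way:

    # Target interfaces live in a dedicated network namespace
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair (both ends stay on the host)
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if                    # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # A bridge joins the host-side peers so initiator and target can reach each other
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # Open the NVMe/TCP listener port and allow intra-bridge forwarding
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3   # host-to-target sanity check, as in the trace above

The "Cannot find device" and "Cannot open network namespace" messages earlier in the trace are expected: the script first tears down any leftovers from a previous run (the nomaster/down/delete commands are each followed by a true guard) before recreating everything.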
00:16:17.223 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=61728 00:16:17.223 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 61728 00:16:17.223 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 61728 ']' 00:16:17.223 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.223 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:17.223 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:17.223 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:17.223 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:16:17.223 12:15:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:17.223 [2024-11-25 12:15:12.807815] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:16:17.223 [2024-11-25 12:15:12.807883] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:17.223 [2024-11-25 12:15:12.948596] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:17.223 [2024-11-25 12:15:12.985428] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:17.223 [2024-11-25 12:15:12.985632] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:17.223 [2024-11-25 12:15:12.985644] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:17.223 [2024-11-25 12:15:12.985649] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:17.223 [2024-11-25 12:15:12.985654] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
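The nvmf_tgt instance above (nvmfpid=61728) is launched inside the namespace and then polled until its RPC socket answers. A hand-rolled sketch of that launch-and-wait step, assuming rpc_get_methods as the readiness probe (waitforlisten's real polling loop lives in autotest_common.sh):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # -m 0xE selects cores 1-3, matching "Total cores available: 3" above;
    # -e 0xFFFF enables all tracepoint groups
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done

While it runs, 'spdk_trace -s nvmf -i 0' (as the notice above suggests) can snapshot those tracepoints, or /dev/shm/nvmf_trace.0 can be copied out for offline analysis.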
00:16:17.223 [2024-11-25 12:15:12.986477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:16:17.223 [2024-11-25 12:15:12.986817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:16:17.223 [2024-11-25 12:15:12.986839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:16:18.172 12:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:18.172 12:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0
00:16:18.172 12:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:16:18.172 12:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable
00:16:18.172 12:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:16:18.172 12:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:16:18.172 12:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000
00:16:18.172 12:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
[2024-11-25 12:15:13.947639] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:16:18.435 12:15:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:16:18.435 12:15:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:16:18.694 [2024-11-25 12:15:14.380578] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:16:18.694 12:15:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
00:16:18.951 12:15:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
00:16:18.951 Malloc0
00:16:19.208 12:15:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:16:19.208 Delay0
00:16:19.466 12:15:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:16:19.466 12:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512
00:16:19.466 NULL1
00:16:19.724 12:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:16:19.724 12:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=61852
00:16:19.724 12:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000
00:16:19.724 12:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61852
00:16:19.724 12:15:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:21.097 Read completed with error (sct=0, sc=11)
00:16:21.097 12:15:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:16:21.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:16:21.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:16:21.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:16:21.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:16:21.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:16:21.097 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:16:21.365 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:16:21.365 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001
00:16:21.365 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001
00:16:21.365 true
00:16:21.624 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61852
00:16:21.624 12:15:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:22.189 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:16:22.446 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002
00:16:22.446 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002
00:16:22.703 true
00:16:22.703 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61852
00:16:22.703 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:22.960 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:16:22.960 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003
00:16:22.960 12:15:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003
00:16:23.216 true
00:16:23.216 12:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61852
00:16:23.216 12:15:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:24.585 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:16:24.585 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004
00:16:24.585 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004
00:16:24.585 true
00:16:24.843 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61852
00:16:24.843 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:24.843 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:16:25.100 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005
00:16:25.100 12:15:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005
00:16:25.358 true
00:16:25.358 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61852
00:16:25.358 12:15:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:26.291 12:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:16:26.550 12:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006
00:16:26.550 12:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006
00:16:26.808 true
00:16:26.808 12:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61852
00:16:26.808 12:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:27.065 12:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:16:27.065 12:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007
00:16:27.065 12:15:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007
00:16:27.322 true
00:16:27.322 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61852
00:16:27.322 12:15:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:28.255 12:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:16:28.512 12:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008
00:16:28.512 12:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008
00:16:28.769 true
00:16:28.769 12:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61852
00:16:28.769 12:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:29.025 12:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:16:29.283 12:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009
00:16:29.283 12:15:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009
00:16:29.283 true
00:16:29.283 12:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61852
00:16:29.283 12:15:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:30.214 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:16:30.472 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:16:30.472 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010
00:16:30.472 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010
00:16:30.730 true
00:16:30.730 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61852
00:16:30.730 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:30.987 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:16:31.246 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011
00:16:31.246 12:15:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011
00:16:31.246 true
00:16:31.504 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61852
00:16:31.504 12:15:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:32.439 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:16:32.439 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012
00:16:32.439 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012
00:16:32.696 true
00:16:32.696 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61852
00:16:32.696 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:32.955 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:16:33.212 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013
00:16:33.212 12:15:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013
00:16:33.212 true
00:16:33.486 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61852
00:16:33.486 12:15:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:34.427 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:16:34.427 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014
00:16:34.427 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014
00:16:34.684 true
00:16:34.684 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61852
00:16:34.684 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:34.941 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:16:35.198 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015
00:16:35.198 12:15:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:16:35.456 true
00:16:35.456 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61852
00:16:35.456 12:15:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:36.416 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:16:36.674 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016
00:16:36.674 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:16:36.674 true
00:16:36.674 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61852
00:16:36.674 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:36.931 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:16:37.190 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017
00:16:37.190 12:15:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
00:16:37.448 true
00:16:37.448 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61852
00:16:37.448 12:15:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:38.382 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:16:38.638 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018
00:16:38.638 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018
00:16:38.895 true
00:16:38.895 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61852
00:16:38.895 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:39.154 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:16:39.154 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019
00:16:39.154 12:15:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:16:39.413 true
00:16:39.413 12:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61852
00:16:39.413 12:15:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:40.346 12:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:16:40.605 12:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020
00:16:40.605 12:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020
00:16:40.862 true
00:16:40.863 12:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61852
00:16:40.863 12:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:41.121 12:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:16:41.121 12:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:16:41.121 12:15:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
00:16:41.379 true
00:16:41.379 12:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61852
00:16:41.379 12:15:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:42.311 12:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:16:42.569 12:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022
00:16:42.569 12:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
00:16:42.827 true
00:16:42.827 12:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61852
00:16:42.827 12:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:43.085 12:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:16:43.085 12:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023
00:16:43.085 12:15:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:16:43.342 true
00:16:43.342 12:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61852
00:16:43.342 12:15:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:44.278 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:16:44.536 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024
00:16:44.536 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
00:16:44.795 true
00:16:44.795 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61852
00:16:44.796 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:45.054 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:16:45.311 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:16:45.311 12:15:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:16:45.311 true
00:16:45.311 12:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61852
00:16:45.311 12:15:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:46.682 12:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:16:46.682 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:16:46.682 12:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:16:46.682 12:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:16:46.939 true
00:16:46.939 12:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61852
00:16:46.939 12:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:46.939 12:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:16:47.197 12:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:16:47.197 12:15:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:16:47.454 true
00:16:47.455 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61852
00:16:47.455 12:15:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:48.388 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:16:48.646 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:16:48.646 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:16:48.903 true
00:16:48.903 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61852
00:16:48.903 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:48.903 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:16:49.162 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:16:49.162 12:15:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:16:49.420 true
00:16:49.420 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61852
00:16:49.420 12:15:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:50.354 Initializing NVMe Controllers
00:16:50.354 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:16:50.354 Controller IO queue size 128, less than required.
00:16:50.354 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:16:50.354 Controller IO queue size 128, less than required.
00:16:50.354 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:16:50.354 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:16:50.354 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:16:50.354 Initialization complete. Launching workers.
00:16:50.354 ========================================================
00:16:50.354 Latency(us)
00:16:50.354 Device Information : IOPS MiB/s Average min max
00:16:50.354 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 373.67 0.18 195558.61 2864.63 1017116.98
00:16:50.354 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 14650.47 7.15 8736.97 2589.74 572693.66
00:16:50.354 ========================================================
00:16:50.354 Total : 15024.13 7.34 13383.42 2589.74 1017116.98
00:16:50.354
00:16:50.354 12:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:16:50.612 12:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:16:50.613 12:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:16:50.871 true
00:16:50.871 12:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 61852
00:16:50.871 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (61852) - No such process
00:16:50.871 12:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 61852
00:16:50.871 12:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:51.127 12:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:16:51.127 12:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:16:51.127 12:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:16:51.127 12:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:16:51.127 12:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:16:51.127 12:15:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:16:51.383 null0
00:16:51.383 12:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:16:51.383 12:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:16:51.383 12:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:16:51.639 null1
00:16:51.639 12:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:16:51.639 12:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:16:51.639 12:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:16:51.895 null2
00:16:51.895 12:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:16:51.895 12:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:16:51.895 12:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:16:51.895 null3
00:16:51.895 12:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:16:51.895 12:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:16:51.895 12:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:16:52.151 null4
00:16:52.151 12:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:16:52.151 12:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:16:52.151 12:15:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:16:52.425 null5
00:16:52.425 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:16:52.425 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:16:52.425 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:16:52.425 null6
00:16:52.425 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:16:52.425 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:16:52.425 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:16:52.683 null7
00:16:52.683 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:16:52.683 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:16:52.683 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:16:52.683 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:16:52.683 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:16:52.683 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
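The stanza that repeated above from null_size=1001 through 1030 is the first stress phase: while spdk_nvme_perf runs its 30-second randread workload, the script keeps hot-removing namespace 1, re-adding Delay0, and growing NULL1 by 1 MB per pass (the "true" lines are bdev_null_resize's output; the suppressed "Read completed with error" messages are the initiator seeing I/O fail against the just-removed namespace). Condensed from the xtrace of ns_hotplug_stress.sh@44-50, with the loop framing reconstructed rather than quoted:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Keep hot-plugging for as long as the perf process (PID $PERF_PID) is alive.
    while kill -0 "$PERF_PID" 2> /dev/null; do
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # hot-remove NSID 1
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # re-add the delay bdev
        null_size=$((null_size + 1))
        $rpc bdev_null_resize NULL1 "$null_size"                      # grow NULL1 by 1 MB
    done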
00:16:52.683 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:16:52.683 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:16:52.683 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:16:52.683 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:16:52.683 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:16:52.683 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:16:52.683 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:16:52.683 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:16:52.683 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:16:52.683 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:16:52.683 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:16:52.683 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:16:52.683 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:16:52.683 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:16:52.683 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:16:52.683 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:16:52.683 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:16:52.683 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:16:52.683 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:16:52.683 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:16:52.683 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:16:52.683 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:16:52.683 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:16:52.683 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:16:52.683 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:16:52.683 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:16:52.683 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:16:52.683 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:16:52.683 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:16:52.683 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:16:52.683 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:16:52.683 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:16:52.683 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:16:52.683 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:16:52.683 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:16:52.683 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:16:52.684 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:16:52.684 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:16:52.684 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:16:52.684 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:16:52.684 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:16:52.684 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:16:52.684 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:16:52.684 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:16:52.684 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:16:52.684 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
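Each "add_remove N nullM" launch traced above and below (eight in all, reaped by the wait on line 66) runs the same small helper in the background; per the xtrace of ns_hotplug_stress.sh@14-18 it is equivalent to:

    # add_remove <nsid> <bdev>: toggle one namespace ten times against cnode1.
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }
    # Launched once per null bdev, e.g.: add_remove 1 null0 &  (PID collected via pids+=($!))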
00:16:52.684 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:16:52.684 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:16:52.684 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:16:52.684 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:16:52.684 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:16:52.684 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:16:52.684 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:16:52.684 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:16:52.684 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:16:52.684 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:16:52.684 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:16:52.684 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:16:52.684 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:16:52.684 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 62906 62908 62909 62911 62913 62915 62918 62919
00:16:52.684 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:16:52.684 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:16:52.684 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:16:52.684 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:16:52.684 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:16:52.684 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:16:52.684 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:16:52.684 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:16:52.941 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:52.941 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:16:52.941 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:16:52.941 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:16:52.941 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:16:52.941 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:16:53.970 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:16:53.970 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:16:53.199 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:16:53.199 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:16:53.199 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:16:53.199 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:16:53.199 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:16:53.199 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:16:53.199 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:16:53.199 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:16:53.199 12:15:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:16:53.199 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:16:53.199 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:16:53.199 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:16:53.199 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:16:53.199 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:16:53.199 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:16:53.199 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:16:53.199 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:16:53.199 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:16:53.199 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:16:53.199 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:16:53.199 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:16:53.199 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:16:53.199 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:16:53.199 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:16:53.199 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:53.456 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:16:53.456 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:16:53.456 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:16:53.456 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:16:53.456 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:16:53.456 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:16:53.456 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:16:53.456 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:16:53.456 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:16:53.456 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:16:53.456 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:16:53.456 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:16:53.456 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:16:53.713 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:16:53.713 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:16:53.713 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:16:53.713 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:16:53.713 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:16:53.713 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:16:53.713 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:16:53.713 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:16:53.713 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:16:53.713 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:16:53.713 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:16:53.713 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:16:53.713 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:16:53.713 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:16:53.713 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:16:53.713 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:16:53.713 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:16:53.713 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:16:53.713 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:53.713 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:16:53.970 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:16:53.970 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:16:53.970 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:16:53.970 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:16:53.970 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:16:53.970 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:16:53.970 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:16:53.970 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:16:53.970 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:16:53.970 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:16:53.970 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:16:53.970 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:16:53.970 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:16:54.229 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:16:54.229 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:16:54.229 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:16:54.229 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:16:54.229 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:16:54.229 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:16:54.229 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:16:54.229 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:16:54.229 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:16:54.229 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:16:54.229 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:16:54.229 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:16:54.229 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:16:54.229 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:16:54.229 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:16:54.229 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:16:54.229 12:15:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:16:54.229 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:16:54.486 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:16:54.486 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:16:54.486 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:16:54.486 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:16:54.486 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:16:54.486 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:16:54.486 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:16:54.486 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:16:54.486 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:16:54.486 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:16:54.486 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:16:54.486 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:16:54.486 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:16:54.486 12:15:50
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:54.486 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:54.486 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:54.743 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:54.743 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:54.743 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:54.743 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:54.743 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:54.743 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:54.743 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:54.743 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:54.743 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:54.743 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:54.743 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:54.743 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:54.743 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:54.743 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:54.743 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:54.743 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:54.743 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:54.743 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:54.743 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
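The add/remove churn traced here is driven by three script lines: target/ns_hotplug_stress.sh@16 is the loop header, @17 attaches one of the null bdevs null0-null7 to nqn.2016-06.io.spdk:cnode1 under a namespace ID, and @18 detaches a namespace again. A minimal reconstruction of that loop, inferred only from the xtrace (the RANDOM-based namespace choice, the backgrounding that would explain the interleaved ordering, and the trailing wait are assumptions):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    subnqn=nqn.2016-06.io.spdk:cnode1
    for ((i = 0; i < 10; ++i)); do                 # ns_hotplug_stress.sh@16
        n=$((RANDOM % 8 + 1))
        # @17: attach null bdev "null<n-1>" as namespace n (assumed randomized)
        "$rpc" nvmf_subsystem_add_ns -n "$n" "$subnqn" "null$((n - 1))" &
        n=$((RANDOM % 8 + 1))
        # @18: detach a (not necessarily the same) namespace
        "$rpc" nvmf_subsystem_remove_ns "$subnqn" "$n" &
    done
    wait                                           # assumed: settle one burst

Each ten-pass burst like the ones traced here exercises the target's namespace hot-attach and hot-detach paths while host connections stay up.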
00:16:54.743 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:55.002 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:55.002 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:55.002 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:55.002 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:55.002 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:55.002 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:55.002 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:55.002 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:55.002 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:55.002 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:55.002 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:55.002 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:55.002 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:55.002 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:55.002 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:55.002 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:55.002 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:55.002 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:55.002 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:55.260 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:55.260 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:55.260 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:55.260 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:55.260 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:55.260 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:55.260 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:55.260 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:55.260 12:15:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:55.260 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:55.260 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:55.260 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:55.260 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:55.260 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:55.260 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:55.601 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:55.601 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:55.601 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:55.601 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:55.601 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:55.601 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:55.601 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 
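Every rpc.py invocation in this loop is a one-shot JSON-RPC 2.0 exchange with the running target. The nvmf_subsystem_remove_ns call just above, replayed by hand, would look roughly like this (the socat framing and the default /var/tmp/spdk.sock socket path are illustrative assumptions; rpc.py normally handles this):

    # send one raw JSON-RPC request to the SPDK target and print the reply
    printf '%s' '{"jsonrpc":"2.0","id":1,"method":"nvmf_subsystem_remove_ns",
        "params":{"nqn":"nqn.2016-06.io.spdk:cnode1","nsid":2}}' |
        socat - UNIX-CONNECT:/var/tmp/spdk.sock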
00:16:55.601 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:55.601 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:55.601 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:55.602 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:55.602 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:55.602 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:55.602 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:55.602 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:55.602 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:55.602 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:55.602 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:55.602 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:55.602 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:55.602 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:55.602 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:55.602 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:55.602 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:55.602 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:55.602 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:55.602 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:55.602 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:55.602 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:16:55.860 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:55.860 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:55.860 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:55.860 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:55.860 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:55.860 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:55.860 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:55.860 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:55.860 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:55.860 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:55.860 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:55.860 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:55.860 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:55.860 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:55.860 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:55.860 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:55.860 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:55.860 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:55.860 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:56.118 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:56.118 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:56.118 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:56.118 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:56.118 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:56.118 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:56.118 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:56.118 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:56.118 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:56.118 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:56.118 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:56.118 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:56.118 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:56.118 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:56.118 12:15:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:56.376 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:56.376 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:56.376 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:56.376 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:56.376 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:56.376 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:56.376 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:56.376 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:56.376 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:56.376 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:56.376 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:56.376 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:56.376 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:56.376 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:56.376 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:56.376 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:56.376 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:56.376 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:56.376 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:56.376 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:56.376 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:56.376 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:56.634 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:56.635 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:56.635 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:56.635 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:56.635 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:56.635 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:56.635 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 
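A few records further down, the loop finally drains, the EXIT trap is cleared (ns_hotplug_stress.sh@68), and nvmftestfini (@70) tears the target down: nvmfcleanup syncs and unloads nvme-tcp, nvme-fabrics and nvme-keyring, then killprocess stops the nvmf_tgt reactor, pid 61728 in this run. Read back out of the autotest_common.sh records that follow, the helper is roughly this (a reconstruction from the trace, not the verbatim source):

    killprocess() {                                  # common/autotest_common.sh
        local pid=$1
        [[ -z $pid ]] && return 1                    # @954: '[' -z 61728 ']'
        kill -0 "$pid" || return 0                   # @958: still running?
        if [[ $(uname) == Linux ]]; then             # @959
            process_name=$(ps --no-headers -o comm= "$pid")  # @960: reactor_1
        fi
        if [[ $process_name == sudo ]]; then         # @964: not the case here
            :  # a sudo wrapper would need its child signalled instead (assumption)
        fi
        echo "killing process with pid $pid"         # @972
        kill "$pid"                                  # @973
        wait "$pid"                                  # @978
    }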
00:16:56.635 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:56.635 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:56.635 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:56.635 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:56.635 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:56.635 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:56.635 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:56.892 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:56.892 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:56.892 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:56.892 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:56.892 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:56.892 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:56.892 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:56.892 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:56.892 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:56.892 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:56.892 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:56.892 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:56.892 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:56.892 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:56.892 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 
nqn.2016-06.io.spdk:cnode1 null3 00:16:56.892 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:56.892 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:56.892 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:56.892 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:56.892 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:57.151 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:57.151 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:57.151 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:57.151 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:57.151 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:57.151 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:57.151 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:57.151 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:57.151 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:57.151 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:57.151 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:57.151 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:57.151 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:57.151 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:57.151 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:57.151 12:15:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:57.409 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:57.409 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:57.409 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:57.409 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:57.409 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:57.409 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:57.409 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:57.409 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:57.409 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:16:57.409 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:57.409 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:16:57.409 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:57.409 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:16:57.409 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:57.409 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:57.409 rmmod nvme_tcp 00:16:57.409 rmmod nvme_fabrics 00:16:57.409 rmmod nvme_keyring 00:16:57.409 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:57.409 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:16:57.409 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:16:57.409 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 61728 ']' 00:16:57.409 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 61728 00:16:57.409 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 61728 ']' 00:16:57.409 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 61728 00:16:57.409 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:16:57.409 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:57.666 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61728 00:16:57.666 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:57.666 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:57.666 killing process with pid 61728 00:16:57.666 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61728' 00:16:57.666 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 61728 00:16:57.666 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 61728 00:16:57.666 12:15:53 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:57.666 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:57.666 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:57.666 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:16:57.666 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:16:57.666 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:16:57.666 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:57.666 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:57.666 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:57.666 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:57.666 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:57.666 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:57.666 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:57.666 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:57.666 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:57.666 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:57.666 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:57.666 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:57.924 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:57.924 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:57.924 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:57.924 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:57.924 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:57.924 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:57.924 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:57.924 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:57.924 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@300 -- # return 0 00:16:57.924 00:16:57.924 real 0m41.430s 00:16:57.924 user 3m14.624s 00:16:57.924 sys 0m9.881s 00:16:57.924 12:15:53 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:57.924 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:16:57.924 ************************************ 00:16:57.924 END TEST nvmf_ns_hotplug_stress 00:16:57.924 ************************************ 00:16:57.924 12:15:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:16:57.924 12:15:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:57.924 12:15:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:57.924 12:15:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:16:57.924 ************************************ 00:16:57.924 START TEST nvmf_delete_subsystem 00:16:57.924 ************************************ 00:16:57.924 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:16:57.924 * Looking for test storage... 00:16:57.924 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:57.924 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:57.924 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:16:57.924 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:58.184 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:58.184 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:58.184 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:58.184 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:58.184 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:16:58.184 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:16:58.184 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:16:58.184 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:16:58.184 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:16:58.184 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:16:58.184 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:16:58.184 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:58.184 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:16:58.184 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:16:58.184 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:58.184 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:58.184 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:16:58.184 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:16:58.184 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:58.184 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:16:58.184 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:16:58.184 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:16:58.184 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:16:58.184 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:58.184 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:16:58.184 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:16:58.184 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:58.184 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:58.184 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:16:58.184 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:58.184 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:58.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:58.184 --rc genhtml_branch_coverage=1 00:16:58.184 --rc genhtml_function_coverage=1 00:16:58.184 --rc genhtml_legend=1 00:16:58.184 --rc geninfo_all_blocks=1 00:16:58.184 --rc geninfo_unexecuted_blocks=1 00:16:58.184 00:16:58.184 ' 00:16:58.184 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:58.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:58.184 --rc genhtml_branch_coverage=1 00:16:58.184 --rc genhtml_function_coverage=1 00:16:58.184 --rc genhtml_legend=1 00:16:58.184 --rc geninfo_all_blocks=1 00:16:58.184 --rc geninfo_unexecuted_blocks=1 00:16:58.184 00:16:58.184 ' 00:16:58.184 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:58.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:58.184 --rc genhtml_branch_coverage=1 00:16:58.184 --rc genhtml_function_coverage=1 00:16:58.184 --rc genhtml_legend=1 00:16:58.184 --rc geninfo_all_blocks=1 00:16:58.184 --rc geninfo_unexecuted_blocks=1 00:16:58.184 00:16:58.184 ' 00:16:58.184 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:58.184 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:58.184 --rc genhtml_branch_coverage=1 00:16:58.184 --rc genhtml_function_coverage=1 00:16:58.184 --rc genhtml_legend=1 00:16:58.184 --rc geninfo_all_blocks=1 00:16:58.184 --rc geninfo_unexecuted_blocks=1 00:16:58.184 00:16:58.184 ' 00:16:58.184 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:58.184 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:16:58.184 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:58.184 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:58.184 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:58.184 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:58.184 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:58.184 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:58.184 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.185 
12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:58.185 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:58.185 Cannot find device "nvmf_init_br" 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:16:58.185 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:58.185 Cannot find device "nvmf_init_br2" 00:16:58.186 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:16:58.186 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:58.186 Cannot find device "nvmf_tgt_br" 00:16:58.186 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # true 00:16:58.186 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:58.186 Cannot find device "nvmf_tgt_br2" 00:16:58.186 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # true 00:16:58.186 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:58.186 Cannot find device "nvmf_init_br" 00:16:58.186 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # true 00:16:58.186 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:58.186 Cannot find device "nvmf_init_br2" 00:16:58.186 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # true 00:16:58.186 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:58.186 Cannot find device "nvmf_tgt_br" 00:16:58.186 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # true 00:16:58.186 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:58.186 Cannot find device "nvmf_tgt_br2" 00:16:58.186 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # true 00:16:58.186 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:58.186 Cannot find device "nvmf_br" 00:16:58.186 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # true 00:16:58.186 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:58.186 Cannot find device "nvmf_init_if" 00:16:58.186 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # true 00:16:58.186 12:15:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:58.186 Cannot find device "nvmf_init_if2" 00:16:58.186 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # true 00:16:58.186 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:58.186 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
00:16:58.186 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # true 00:16:58.186 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:58.186 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:58.186 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # true 00:16:58.186 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:58.186 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:58.186 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:58.186 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:58.186 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:58.186 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:58.186 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:58.445 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:58.445 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:58.445 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:58.445 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:58.445 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:58.445 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:58.445 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:58.445 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:58.445 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:58.445 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:58.445 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:58.445 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:58.445 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:58.445 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:58.445 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # ip link set nvmf_br 
up 00:16:58.445 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:58.445 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:58.445 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:58.445 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:58.445 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:58.445 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:58.445 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:58.445 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:58.445 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:58.445 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:58.445 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:58.445 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:58.445 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:16:58.445 00:16:58.445 --- 10.0.0.3 ping statistics --- 00:16:58.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:58.445 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:16:58.445 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:58.445 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:58.445 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:16:58.445 00:16:58.445 --- 10.0.0.4 ping statistics --- 00:16:58.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:58.445 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:16:58.445 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:58.445 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:58.445 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:16:58.445 00:16:58.445 --- 10.0.0.1 ping statistics --- 00:16:58.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:58.445 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:16:58.445 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:58.445 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:58.445 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:16:58.445 00:16:58.445 --- 10.0.0.2 ping statistics --- 00:16:58.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:58.445 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:16:58.445 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:58.445 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@461 -- # return 0 00:16:58.445 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:58.445 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:58.445 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:58.445 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:58.445 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:58.445 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:58.445 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:58.445 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:16:58.445 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:58.445 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:58.445 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:58.445 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=64297 00:16:58.445 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 64297 00:16:58.445 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 64297 ']' 00:16:58.445 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:58.445 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:58.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:58.445 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:58.445 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:16:58.445 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:58.445 12:15:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:58.445 [2024-11-25 12:15:54.242385] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
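For reference, the nvmf_veth_init sequence traced above boils down to the sketch below: a bridge in the host namespace joining veth pairs whose far ends sit inside nvmf_tgt_ns_spdk. This is a trimmed, hypothetical standalone version covering only the first initiator/target pair (the harness builds nvmf_init_if2/nvmf_tgt_if2 the same way); names, addresses, and the iptables rules are copied from the trace, and it assumes root with iproute2 and iptables available.

  # Initiator-side veth stays in the default netns; the target end moves into the namespace.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # Bridge the two stay-behind peer ends so 10.0.0.1 can reach 10.0.0.3.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br master nvmf_br; ip link set nvmf_tgt_br up
  # Open the NVMe/TCP port; each rule carries an SPDK_NVMF comment tag.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
  ping -c 1 10.0.0.3   # host -> namespaced target, as in the reachability pings above

The comment tagging pays off during teardown at the end of this run, where nvmf/common.sh@791 strips exactly these rules with iptables-save | grep -v SPDK_NVMF | iptables-restore.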
00:16:58.445 [2024-11-25 12:15:54.242449] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:58.703 [2024-11-25 12:15:54.378856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:58.703 [2024-11-25 12:15:54.421977] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:58.703 [2024-11-25 12:15:54.422028] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:58.703 [2024-11-25 12:15:54.422037] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:58.703 [2024-11-25 12:15:54.422043] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:58.703 [2024-11-25 12:15:54.422048] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:58.703 [2024-11-25 12:15:54.422923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:58.703 [2024-11-25 12:15:54.423254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:59.723 12:15:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:59.723 12:15:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:16:59.723 12:15:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:59.723 12:15:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:59.723 12:15:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:59.723 12:15:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:59.723 12:15:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:59.723 12:15:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.723 12:15:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:59.723 [2024-11-25 12:15:55.196111] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:59.723 12:15:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.723 12:15:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:59.723 12:15:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.723 12:15:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:59.723 12:15:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.723 12:15:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:59.723 12:15:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:59.723 12:15:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:59.723 [2024-11-25 12:15:55.212219] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:59.723 12:15:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.723 12:15:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:59.723 12:15:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.723 12:15:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:59.723 NULL1 00:16:59.723 12:15:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.723 12:15:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:59.723 12:15:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.723 12:15:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:59.723 Delay0 00:16:59.723 12:15:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.724 12:15:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:59.724 12:15:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.724 12:15:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:16:59.724 12:15:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.724 12:15:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=64348 00:16:59.724 12:15:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:16:59.724 12:15:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:16:59.724 [2024-11-25 12:15:55.396846] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
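Condensed, the target-side setup just traced (delete_subsystem.sh@15-32) is the sequence sketched below. This is a hedged reconstruction, not the harness itself: it assumes $SPDK_DIR points at an SPDK checkout and that the nvmf_tgt launched above is reachable on the default /var/tmp/spdk.sock; rpc_cmd in the trace is the harness wrapper around scripts/rpc.py, and all arguments are copied verbatim from the log.

  rpc="$SPDK_DIR/scripts/rpc.py"
  $rpc nvmf_create_transport -t tcp -o -u 8192                      # TCP transport, 8192-byte I/O unit
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $rpc bdev_null_create NULL1 1000 512                              # 1000 MiB null bdev, 512-byte blocks
  $rpc bdev_delay_create -b NULL1 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000                   # 1 s avg/p99 latency on reads and writes
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  # Drive queued I/O from the initiator, then delete the subsystem underneath it:
  "$SPDK_DIR/build/bin/spdk_nvme_perf" -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  sleep 2
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The 1-second Delay0 latencies keep up to 128 commands in flight, so the deletion has work to abort; the runs of "completed with error (sct=0, sc=8)" that follow are those aborts (SCT 0 / SC 0x08 corresponds to NVMe's generic "command aborted due to SQ deletion" status).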
00:17:01.650 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:01.650 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.650 12:15:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:17:01.650 Read completed with error (sct=0, sc=8)
00:17:01.650 starting I/O failed: -6
00:17:01.650 Read completed with error (sct=0, sc=8)
00:17:01.650 Write completed with error (sct=0, sc=8)
[... repeated "Read/Write completed with error (sct=0, sc=8)" completions and "starting I/O failed: -6" markers elided ...]
[2024-11-25 12:15:57.431629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9db0000c70 is same with the state(6) to be set
[... repeated completion-with-error lines elided ...]
[2024-11-25 12:15:57.432101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17117e0 is same with the state(6) to be set
[... repeated completion-with-error lines elided ...]
[2024-11-25 12:15:58.409802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x170cee0 is same with the state(6) to be set
[... repeated completion-with-error lines elided ...]
[2024-11-25 12:15:58.430912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1713ea0 is same with the state(6) to be set
[... repeated completion-with-error lines elided ...]
[2024-11-25 12:15:58.431032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1710a50 is same with the state(6) to be set
[... repeated completion-with-error lines elided ...]
[2024-11-25 12:15:58.431905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9db000d830 is same with the state(6) to be set
[... repeated completion-with-error lines elided ...]
[2024-11-25 12:15:58.432339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9db000d050 is same with the state(6) to be set
00:17:02.587 Initializing NVMe Controllers
00:17:02.587 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:17:02.587 Controller IO queue size 128, less than required.
00:17:02.587 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:17:02.587 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:17:02.587 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:17:02.587 Initialization complete. Launching workers. 00:17:02.587 ======================================================== 00:17:02.587 Latency(us) 00:17:02.587 Device Information : IOPS MiB/s Average min max 00:17:02.588 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 169.33 0.08 896420.80 247.75 1010777.84 00:17:02.588 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 159.90 0.08 918419.20 877.71 1010725.08 00:17:02.588 ======================================================== 00:17:02.588 Total : 329.23 0.16 907104.79 247.75 1010777.84 00:17:02.588 00:17:02.588 [2024-11-25 12:15:58.433003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170cee0 (9): Bad file descriptor 00:17:02.588 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:17:02.588 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.588 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:17:02.588 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 64348 00:17:02.588 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:17:03.154 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:17:03.154 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 64348 00:17:03.154 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (64348) - No such process 00:17:03.154 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 64348 00:17:03.154 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:17:03.154 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 64348 00:17:03.154 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:17:03.154 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:03.154 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:17:03.154 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:03.154 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 64348 00:17:03.154 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:17:03.154 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:03.154 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:03.154 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:03.154 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:03.154 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.154 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:17:03.154 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.154 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:03.154 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.154 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:17:03.154 [2024-11-25 12:15:58.955892] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:03.154 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.154 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:03.154 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.154 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:17:03.154 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.154 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=64390 00:17:03.154 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:17:03.154 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 64390 00:17:03.154 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:03.154 12:15:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:17:03.412 [2024-11-25 12:15:59.129935] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
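The delay=0 / kill -0 / sleep 0.5 iterations that follow implement a small bounded watchdog on the background perf process. As a standalone sketch (the loop shape mirrors delete_subsystem.sh@56-60 in the trace below; capturing the pid with $! is an assumption about how perf was backgrounded):

  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do   # kill -0 probes liveness without sending a signal
      if (( delay++ > 20 )); then             # 20 iterations x 0.5 s gives roughly a 10 s budget
          echo "spdk_nvme_perf ($perf_pid) still running past its deadline" >&2
          exit 1
      fi
      sleep 0.5
  done

Because kill -0 fails exactly when the pid disappears, the poll later hits kill: (64390) - No such process once the 3-second perf run completes.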
00:17:03.672 12:15:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:03.672 12:15:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 64390 00:17:03.672 12:15:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:04.235 12:15:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:04.235 12:15:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 64390 00:17:04.236 12:15:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:04.800 12:16:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:04.800 12:16:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 64390 00:17:04.800 12:16:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:05.364 12:16:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:05.364 12:16:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 64390 00:17:05.364 12:16:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:05.621 12:16:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:05.621 12:16:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 64390 00:17:05.621 12:16:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:06.186 12:16:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:06.186 12:16:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 64390 00:17:06.186 12:16:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:06.444 Initializing NVMe Controllers 00:17:06.444 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:06.444 Controller IO queue size 128, less than required. 00:17:06.444 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:06.444 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:17:06.444 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:17:06.444 Initialization complete. Launching workers. 
00:17:06.444 ======================================================== 00:17:06.444 Latency(us) 00:17:06.444 Device Information : IOPS MiB/s Average min max 00:17:06.444 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002793.88 1000082.62 1009344.20 00:17:06.444 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002734.27 1000095.40 1042317.25 00:17:06.444 ======================================================== 00:17:06.444 Total : 256.00 0.12 1002764.07 1000082.62 1042317.25 00:17:06.444 00:17:06.702 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:06.702 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 64390 00:17:06.702 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (64390) - No such process 00:17:06.702 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 64390 00:17:06.702 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:17:06.702 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:17:06.702 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:06.702 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:17:06.702 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:06.702 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:17:06.702 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:06.702 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:06.702 rmmod nvme_tcp 00:17:06.702 rmmod nvme_fabrics 00:17:06.702 rmmod nvme_keyring 00:17:06.702 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:06.959 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:17:06.959 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:17:06.959 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 64297 ']' 00:17:06.959 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 64297 00:17:06.959 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 64297 ']' 00:17:06.959 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 64297 00:17:06.959 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:17:06.959 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:06.959 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64297 00:17:06.959 killing process with pid 64297 00:17:06.959 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:06.959 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:17:06.959 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64297' 00:17:06.959 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 64297 00:17:06.959 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 64297 00:17:06.959 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:06.959 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:06.959 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:06.959 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:17:06.959 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:17:06.959 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:17:06.960 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:06.960 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:06.960 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:06.960 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:06.960 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:06.960 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:06.960 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:06.960 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:06.960 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:06.960 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:06.960 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:06.960 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:07.218 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:07.218 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:07.218 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:07.218 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:07.218 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:07.218 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:07.218 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:17:07.218 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:07.218 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@300 -- # return 0 00:17:07.218 00:17:07.218 real 0m9.235s 00:17:07.218 user 0m28.482s 00:17:07.218 sys 0m1.352s 00:17:07.218 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:07.218 ************************************ 00:17:07.218 END TEST nvmf_delete_subsystem 00:17:07.218 ************************************ 00:17:07.218 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:17:07.218 12:16:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:17:07.218 12:16:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:07.218 12:16:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:07.218 12:16:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:17:07.218 ************************************ 00:17:07.218 START TEST nvmf_host_management 00:17:07.218 ************************************ 00:17:07.218 12:16:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:17:07.218 * Looking for test storage... 00:17:07.218 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:07.218 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:07.218 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:17:07.218 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:07.488 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:07.488 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:07.488 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:07.488 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:07.488 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:17:07.488 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:17:07.488 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:17:07.488 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:17:07.488 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:17:07.488 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:17:07.488 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:17:07.488 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:07.488 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:17:07.488 
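The teardown just above waits for the perf process by probing `kill -0 64390` inside a bounded `(( delay++ > 20 ))` loop, then reaps it with `wait` (the "No such process" line is the probe confirming exit). A minimal standalone sketch of that liveness-polling pattern; the function name, sleep interval and retry budget here are illustrative, not taken from the harness:

  # kill -0 sends no signal; it only tests whether the pid still exists.
  wait_for_exit() {
      local pid=$1 delay=0
      while kill -0 "$pid" 2>/dev/null; do
          if (( delay++ > 20 )); then    # bounded wait, as in the loop above
              return 1                   # still alive after the budget
          fi
          sleep 0.5
      done
      wait "$pid" 2>/dev/null            # reap exit status (children only)
      return 0
  }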
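The `iptr` step in the same cleanup restores firewall state by round-tripping `iptables-save` through `grep -v SPDK_NVMF` into `iptables-restore`; that only works because every rule the harness installs carries a matching `-m comment` tag, as the setup later in this log shows. A rough sketch of the tag-and-restore pairing, using one of the logged rules as the example:

  # Install: tag the rule so it can be identified later.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

  # Teardown: drop only the tagged rules, leave everything else intact.
  iptables-save | grep -v SPDK_NVMF | iptables-restore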
12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:17:07.488 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:07.488 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:07.488 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:17:07.488 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:17:07.488 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:07.488 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:17:07.488 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:17:07.488 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:17:07.488 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:17:07.488 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:07.488 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:17:07.488 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:17:07.488 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:07.488 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:07.488 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:07.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.489 --rc genhtml_branch_coverage=1 00:17:07.489 --rc genhtml_function_coverage=1 00:17:07.489 --rc genhtml_legend=1 00:17:07.489 --rc geninfo_all_blocks=1 00:17:07.489 --rc geninfo_unexecuted_blocks=1 00:17:07.489 00:17:07.489 ' 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:07.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.489 --rc genhtml_branch_coverage=1 00:17:07.489 --rc genhtml_function_coverage=1 00:17:07.489 --rc genhtml_legend=1 00:17:07.489 --rc geninfo_all_blocks=1 00:17:07.489 --rc geninfo_unexecuted_blocks=1 00:17:07.489 00:17:07.489 ' 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:07.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.489 --rc genhtml_branch_coverage=1 00:17:07.489 --rc genhtml_function_coverage=1 00:17:07.489 --rc genhtml_legend=1 00:17:07.489 --rc geninfo_all_blocks=1 00:17:07.489 --rc geninfo_unexecuted_blocks=1 00:17:07.489 00:17:07.489 ' 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:07.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:07.489 --rc genhtml_branch_coverage=1 00:17:07.489 --rc 
genhtml_function_coverage=1 00:17:07.489 --rc genhtml_legend=1 00:17:07.489 --rc geninfo_all_blocks=1 00:17:07.489 --rc geninfo_unexecuted_blocks=1 00:17:07.489 00:17:07.489 ' 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:07.489 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:07.489 12:16:03 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:07.489 Cannot find device "nvmf_init_br" 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:07.489 Cannot find device "nvmf_init_br2" 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:07.489 Cannot find device "nvmf_tgt_br" 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:17:07.489 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:07.490 Cannot find device "nvmf_tgt_br2" 00:17:07.490 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:17:07.490 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:07.490 Cannot find device "nvmf_init_br" 00:17:07.490 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:17:07.490 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:07.490 Cannot find device "nvmf_init_br2" 00:17:07.490 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:17:07.490 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:07.490 Cannot find device "nvmf_tgt_br" 00:17:07.490 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:17:07.490 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:07.490 Cannot find device "nvmf_tgt_br2" 00:17:07.490 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:17:07.490 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:07.490 Cannot find device "nvmf_br" 00:17:07.490 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:17:07.490 12:16:03 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:07.490 Cannot find device "nvmf_init_if" 00:17:07.490 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:17:07.490 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:07.490 Cannot find device "nvmf_init_if2" 00:17:07.490 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:17:07.490 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:07.490 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:07.490 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:17:07.490 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:07.490 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:07.490 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:17:07.490 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:07.490 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:07.490 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:07.490 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:07.490 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:07.490 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:07.490 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:07.490 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:07.490 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:07.490 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:07.490 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:07.490 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:07.490 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:07.490 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:07.490 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:07.490 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:07.490 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:07.490 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:07.490 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:07.490 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:07.490 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:07.490 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:07.748 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:07.748 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:07.748 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:07.748 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:07.748 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:07.748 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:07.748 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:07.748 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:07.748 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:07.748 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:07.748 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:07.748 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:07.748 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:17:07.748 00:17:07.748 --- 10.0.0.3 ping statistics --- 00:17:07.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.748 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:17:07.748 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:07.748 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:17:07.748 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.034 ms 00:17:07.748 00:17:07.748 --- 10.0.0.4 ping statistics --- 00:17:07.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.748 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:17:07.748 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:07.748 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:07.748 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:17:07.748 00:17:07.748 --- 10.0.0.1 ping statistics --- 00:17:07.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.748 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:17:07.748 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:07.748 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:07.748 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:17:07.748 00:17:07.748 --- 10.0.0.2 ping statistics --- 00:17:07.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:07.748 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:17:07.748 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:07.748 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:17:07.748 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:07.748 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:07.748 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:07.748 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:07.748 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:07.748 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:07.748 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:07.748 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:17:07.748 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:17:07.748 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:17:07.748 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:07.748 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:07.748 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:07.748 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=64685 00:17:07.748 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 64685 00:17:07.748 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 64685 ']' 00:17:07.748 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:07.748 12:16:03 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:07.748 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:07.748 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:07.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:07.748 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:07.749 12:16:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:07.749 [2024-11-25 12:16:03.514215] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:17:07.749 [2024-11-25 12:16:03.514293] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:08.007 [2024-11-25 12:16:03.663169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:08.007 [2024-11-25 12:16:03.705997] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:08.007 [2024-11-25 12:16:03.706030] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:08.007 [2024-11-25 12:16:03.706036] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:08.007 [2024-11-25 12:16:03.706041] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:08.007 [2024-11-25 12:16:03.706046] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
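nvmfappstart launches the target inside the nvmf_tgt_ns_spdk namespace built above and then blocks in waitforlisten until the RPC socket answers. A condensed sketch of that handshake; the binary path and flags match the logged command line, while the retry budget and the use of rpc_get_methods as the readiness probe are assumptions about what waitforlisten does internally:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
  nvmfpid=$!

  # Poll the UNIX-domain RPC socket until the app is ready (bounded retries).
  for _ in $(seq 1 100); do
      if scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
          break
      fi
      sleep 0.1
  done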
00:17:08.007 [2024-11-25 12:16:03.706740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:08.007 [2024-11-25 12:16:03.706856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:08.007 [2024-11-25 12:16:03.707239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:17:08.007 [2024-11-25 12:16:03.707476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:08.573 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:08.573 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:17:08.573 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:08.573 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:08.573 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:08.573 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:08.573 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:08.573 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.573 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:08.831 [2024-11-25 12:16:04.448732] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:08.831 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.831 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:17:08.831 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:08.831 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:08.831 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:17:08.831 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:17:08.831 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:17:08.831 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.831 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:08.831 Malloc0 00:17:08.831 [2024-11-25 12:16:04.515944] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:08.831 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.831 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:17:08.831 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:08.831 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:08.831 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=64757 00:17:08.831 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 64757 /var/tmp/bdevperf.sock 00:17:08.831 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 64757 ']' 00:17:08.831 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:08.831 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:08.831 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:08.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:08.831 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:08.831 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:08.831 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:17:08.831 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:17:08.831 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:17:08.831 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:17:08.831 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:08.831 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:08.831 { 00:17:08.831 "params": { 00:17:08.831 "name": "Nvme$subsystem", 00:17:08.831 "trtype": "$TEST_TRANSPORT", 00:17:08.831 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:08.831 "adrfam": "ipv4", 00:17:08.831 "trsvcid": "$NVMF_PORT", 00:17:08.831 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:08.831 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:08.831 "hdgst": ${hdgst:-false}, 00:17:08.831 "ddgst": ${ddgst:-false} 00:17:08.832 }, 00:17:08.832 "method": "bdev_nvme_attach_controller" 00:17:08.832 } 00:17:08.832 EOF 00:17:08.832 )") 00:17:08.832 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:17:08.832 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:17:08.832 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:17:08.832 12:16:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:17:08.832 "params": { 00:17:08.832 "name": "Nvme0", 00:17:08.832 "trtype": "tcp", 00:17:08.832 "traddr": "10.0.0.3", 00:17:08.832 "adrfam": "ipv4", 00:17:08.832 "trsvcid": "4420", 00:17:08.832 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:08.832 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:08.832 "hdgst": false, 00:17:08.832 "ddgst": false 00:17:08.832 }, 00:17:08.832 "method": "bdev_nvme_attach_controller" 00:17:08.832 }' 00:17:08.832 [2024-11-25 12:16:04.602999] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
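gen_nvmf_target_json above expands a heredoc template per subsystem into the attach-controller config that bdevperf reads over process substitution, which is why the logged argument is `--json /dev/fd/63`. A trimmed sketch of the same render-and-pipe idea; the inner params mirror the rendered config in the log, while the outer "subsystems"/"bdev" wrapper is an assumption about what the `jq`/printf step produces:

  gen_json() {
      cat <<EOF
  { "subsystems": [ { "subsystem": "bdev", "config": [ {
    "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.3",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false, "ddgst": false },
    "method": "bdev_nvme_attach_controller" } ] } ] }
  EOF
  }
  # <(gen_json) appears in the child as /dev/fd/63, matching the logged argv.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
      --json <(gen_json) -q 64 -o 65536 -w verify -t 10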
00:17:08.832 [2024-11-25 12:16:04.603063] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64757 ] 00:17:09.090 [2024-11-25 12:16:04.729586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.090 [2024-11-25 12:16:04.762032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.090 Running I/O for 10 seconds... 00:17:09.656 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:09.656 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:17:09.656 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:09.656 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.656 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:09.656 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.656 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:09.656 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:17:09.656 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:17:09.656 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:17:09.656 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:17:09.656 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:17:09.656 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:17:09.656 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:17:09.656 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:17:09.656 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.656 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:09.656 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:17:09.656 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.656 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1219 00:17:09.656 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1219 -ge 100 ']' 00:17:09.656 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:17:09.656 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 
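waitforio above polls `bdev_get_iostat -b Nvme0n1` on the bdevperf RPC socket and extracts num_read_ops with jq, declaring success once at least 100 reads have completed (1219 here, hence ret=0 and the break). A bare-bones sketch of that poll; the countdown mirrors the `(( i = 10 ))` loop in the log, while the sleep interval is an assumption:

  ret=1
  for (( i = 10; i != 0; i-- )); do
      ops=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
            | jq -r '.bdevs[0].num_read_ops')
      if [ "$ops" -ge 100 ]; then
          ret=0                      # enough I/O observed; stop polling
          break
      fi
      sleep 0.25
  done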
00:17:09.657 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:17:09.657 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:09.657 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.657 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:09.657 [2024-11-25 12:16:05.517353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.657 [2024-11-25 12:16:05.517387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.657 [2024-11-25 12:16:05.517400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:44288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.657 [2024-11-25 12:16:05.517405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.657 [2024-11-25 12:16:05.517411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.657 [2024-11-25 12:16:05.517416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.657 [2024-11-25 12:16:05.517422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:44544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.657 [2024-11-25 12:16:05.517426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.657 [2024-11-25 12:16:05.517432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:44672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.657 [2024-11-25 12:16:05.517437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.657 [2024-11-25 12:16:05.517443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:44800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.657 [2024-11-25 12:16:05.517449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.657 [2024-11-25 12:16:05.517455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:44928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.657 [2024-11-25 12:16:05.517459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.657 [2024-11-25 12:16:05.517465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.657 [2024-11-25 12:16:05.517470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.657 [2024-11-25 12:16:05.517476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:45184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:17:09.657 [2024-11-25 12:16:05.517480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.657 [2024-11-25 12:16:05.517486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:45312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.657 [2024-11-25 12:16:05.517491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.657 [2024-11-25 12:16:05.517497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:45440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.657 [2024-11-25 12:16:05.517505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.657 [2024-11-25 12:16:05.517511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:45568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.657 [2024-11-25 12:16:05.517516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.657 [2024-11-25 12:16:05.517522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:45696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.657 [2024-11-25 12:16:05.517526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.657 [2024-11-25 12:16:05.517532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:45824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.657 [2024-11-25 12:16:05.517536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.657 [2024-11-25 12:16:05.517542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:45952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.657 [2024-11-25 12:16:05.517547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.657 [2024-11-25 12:16:05.517552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:46080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.657 [2024-11-25 12:16:05.517557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.657 [2024-11-25 12:16:05.517570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:46208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.657 [2024-11-25 12:16:05.517575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.657 [2024-11-25 12:16:05.517582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:46336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:09.657 [2024-11-25 12:16:05.517587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.657 [2024-11-25 12:16:05.517593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:46464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:09.657 [2024-11-25 12:16:05.517597] nvme_qpair.c: 243/474 (nvme_io_qpair_print_command / spdk_nvme_print_completion): *NOTICE*: [condensed: ~45 near-identical command/completion pairs] outstanding WRITE commands (sqid:1 cid:44-63 nsid:1 lba:46592-49024 len:128) and READ commands (sqid:1 cid:0-24 nsid:1 lba:40960-44032 len:128), each SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:09.658
task offset: 44160 on job bdev=Nvme0n1 fails 00:17:09.658 00:17:09.658
Latency(us) 00:17:09.658 [2024-11-25T12:16:05.534Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:09.658 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:09.658 Job: Nvme0n1 ended in about 0.61 seconds with error 00:17:09.658 Verification LBA range: start 0x0 length 0x400 00:17:09.658 Nvme0n1 : 0.61 2202.18 137.64 104.87 0.00 27191.42 1625.80 25206.15 00:17:09.658 [2024-11-25T12:16:05.534Z] =================================================================================================================== 00:17:09.658 [2024-11-25T12:16:05.534Z] Total : 2202.18 137.64 104.87 0.00 27191.42 1625.80 25206.15 00:17:09.658
[2024-11-25 12:16:05.519039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:17:09.658
[2024-11-25 12:16:05.520578] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:09.658
[2024-11-25 12:16:05.520597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa04660 (9): Bad file descriptor 00:17:09.658
12:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.658 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:09.658 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.658 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management --
common/autotest_common.sh@10 -- # set +x 00:17:09.658 [2024-11-25 12:16:05.527778] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:17:09.916 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.916 12:16:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:17:10.848 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 64757 00:17:10.848 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (64757) - No such process 00:17:10.848 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:17:10.848 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:17:10.848 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:17:10.848 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:17:10.848 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:17:10.848 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:17:10.848 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:10.848 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:10.848 { 00:17:10.848 "params": { 00:17:10.848 "name": "Nvme$subsystem", 00:17:10.848 "trtype": "$TEST_TRANSPORT", 00:17:10.848 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:10.848 "adrfam": "ipv4", 00:17:10.848 "trsvcid": "$NVMF_PORT", 00:17:10.848 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:10.848 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:10.848 "hdgst": ${hdgst:-false}, 00:17:10.848 "ddgst": ${ddgst:-false} 00:17:10.848 }, 00:17:10.848 "method": "bdev_nvme_attach_controller" 00:17:10.848 } 00:17:10.848 EOF 00:17:10.848 )") 00:17:10.848 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:17:10.848 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:17:10.848 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:17:10.848 12:16:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:17:10.848 "params": { 00:17:10.848 "name": "Nvme0", 00:17:10.848 "trtype": "tcp", 00:17:10.848 "traddr": "10.0.0.3", 00:17:10.848 "adrfam": "ipv4", 00:17:10.848 "trsvcid": "4420", 00:17:10.848 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:10.848 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:10.848 "hdgst": false, 00:17:10.848 "ddgst": false 00:17:10.848 }, 00:17:10.848 "method": "bdev_nvme_attach_controller" 00:17:10.848 }' 00:17:10.848 [2024-11-25 12:16:06.569202] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
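[aside] The heredoc traced above is gen_nvmf_target_json assembling the bdev_nvme_attach_controller parameters that bdevperf consumes over /dev/fd/62. A minimal hand-rolled equivalent — assuming the usual bdev-subsystem JSON wrapper that bdevperf accepts; the file path is illustrative and all parameter values are copied from the trace:

  # Build the same attach-controller config by hand and run bdevperf against
  # the TCP target at 10.0.0.3:4420 (values taken from the trace above).
  cat > /tmp/bdevperf-nvme0.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme0",
              "trtype": "tcp",
              "traddr": "10.0.0.3",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /tmp/bdevperf-nvme0.json -q 64 -o 65536 -w verify -t 1

Passing the config via --json makes bdevperf attach the NVMe-oF TCP controller before the workload starts, which is why the EAL/reactor startup lines below are immediately followed by "Running I/O".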
00:17:10.848 [2024-11-25 12:16:06.569261] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64808 ] 00:17:10.848 [2024-11-25 12:16:06.706564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.106 [2024-11-25 12:16:06.743059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.106 Running I/O for 1 seconds... 00:17:12.038 1984.00 IOPS, 124.00 MiB/s 00:17:12.038 Latency(us) 00:17:12.038 [2024-11-25T12:16:07.914Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:12.038 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:17:12.038 Verification LBA range: start 0x0 length 0x400 00:17:12.038 Nvme0n1 : 1.00 2037.95 127.37 0.00 0.00 30826.33 5368.91 29037.49 00:17:12.038 [2024-11-25T12:16:07.914Z] =================================================================================================================== 00:17:12.038 [2024-11-25T12:16:07.914Z] Total : 2037.95 127.37 0.00 0.00 30826.33 5368.91 29037.49 00:17:12.296 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:17:12.296 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:17:12.296 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:17:12.296 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:17:12.296 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:17:12.296 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:12.296 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:17:12.296 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:12.296 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:17:12.296 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:12.296 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:12.296 rmmod nvme_tcp 00:17:12.296 rmmod nvme_fabrics 00:17:12.296 rmmod nvme_keyring 00:17:12.296 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:12.296 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:17:12.296 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:17:12.296 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 64685 ']' 00:17:12.296 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 64685 00:17:12.296 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 64685 ']' 00:17:12.296 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 64685 00:17:12.296 12:16:08 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:17:12.296 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:12.296 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64685 00:17:12.296 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:12.296 killing process with pid 64685 00:17:12.296 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:12.296 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64685' 00:17:12.296 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 64685 00:17:12.296 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 64685 00:17:12.555 [2024-11-25 12:16:08.229284] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:17:12.555 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:12.555 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:12.555 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:12.555 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:17:12.555 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:12.555 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:17:12.555 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:17:12.555 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:12.555 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:12.555 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:12.555 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:12.555 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:12.555 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:12.555 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:12.555 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:12.555 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:12.555 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:12.555 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:12.555 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:12.555 12:16:08 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:12.814 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:12.814 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:12.814 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:12.814 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:12.814 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:12.814 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:12.814 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:17:12.814 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:17:12.814 00:17:12.814 real 0m5.504s 00:17:12.814 user 0m20.649s 00:17:12.814 sys 0m1.062s 00:17:12.814 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:12.814 ************************************ 00:17:12.814 END TEST nvmf_host_management 00:17:12.814 ************************************ 00:17:12.814 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:17:12.814 12:16:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:17:12.814 12:16:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:12.814 12:16:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:12.814 12:16:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:17:12.814 ************************************ 00:17:12.814 START TEST nvmf_lvol 00:17:12.814 ************************************ 00:17:12.814 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:17:12.814 * Looking for test storage... 
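[aside] The lvol test starting here first locates its test storage (lookup in progress just above) and then gates the lcov coverage flags on the installed lcov version, via the lt / cmp_versions helpers from scripts/common.sh that the next stretch of trace steps through (lt 1.15 2). A standalone sketch of that split-on-dots-and-compare idea — not the exact SPDK helper, just the same logic:

  lt() {
    # Return 0 (true) when dotted version $1 sorts strictly below $2.
    local IFS=.- i
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
      ((${v1[i]:-0} < ${v2[i]:-0})) && return 0   # earlier component decides
      ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
    done
    return 1  # equal versions are not "less than"
  }

  lt 1.15 2 && echo "lcov older than 2.x: keep legacy --rc coverage flags"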
00:17:12.814 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:12.814 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:12.814 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:17:12.814 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:12.814 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:12.814 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:12.814 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:12.814 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:13.074 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:17:13.074 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:17:13.074 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:17:13.074 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:17:13.074 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:17:13.074 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:17:13.074 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:17:13.074 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:13.074 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:17:13.074 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:17:13.074 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:13.074 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:13.074 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:17:13.074 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:17:13.074 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:13.074 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:17:13.074 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:17:13.074 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:17:13.074 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:17:13.074 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:13.074 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:17:13.074 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:17:13.074 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:13.074 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:13.074 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:17:13.074 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:13.074 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:13.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.074 --rc genhtml_branch_coverage=1 00:17:13.074 --rc genhtml_function_coverage=1 00:17:13.074 --rc genhtml_legend=1 00:17:13.074 --rc geninfo_all_blocks=1 00:17:13.074 --rc geninfo_unexecuted_blocks=1 00:17:13.074 00:17:13.074 ' 00:17:13.074 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:13.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.074 --rc genhtml_branch_coverage=1 00:17:13.074 --rc genhtml_function_coverage=1 00:17:13.074 --rc genhtml_legend=1 00:17:13.074 --rc geninfo_all_blocks=1 00:17:13.074 --rc geninfo_unexecuted_blocks=1 00:17:13.074 00:17:13.074 ' 00:17:13.074 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:13.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.074 --rc genhtml_branch_coverage=1 00:17:13.074 --rc genhtml_function_coverage=1 00:17:13.074 --rc genhtml_legend=1 00:17:13.074 --rc geninfo_all_blocks=1 00:17:13.074 --rc geninfo_unexecuted_blocks=1 00:17:13.074 00:17:13.074 ' 00:17:13.074 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:13.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.074 --rc genhtml_branch_coverage=1 00:17:13.074 --rc genhtml_function_coverage=1 00:17:13.074 --rc genhtml_legend=1 00:17:13.074 --rc geninfo_all_blocks=1 00:17:13.074 --rc geninfo_unexecuted_blocks=1 00:17:13.074 00:17:13.074 ' 00:17:13.074 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:13.074 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:17:13.074 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:13.074 12:16:08 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:13.074 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:13.074 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:13.074 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:13.074 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:13.074 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:13.074 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:13.074 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:13.074 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:13.075 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:17:13.075 
12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
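[aside] The nvmf_veth_init variables above name the topology that the following trace builds: veth pairs for initiator and target, the target ends moved into the nvmf_tgt_ns_spdk namespace, and everything joined over the nvmf_br bridge (the "Cannot find device" lines below are just the best-effort teardown of a previous run). Condensed into a runnable sketch for one initiator/target pair, using the same commands the trace issues (run as root):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move target end into the netns
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge                             # bridge the two halves
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3   # initiator can now reach the in-namespace target IP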
00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:13.075 Cannot find device "nvmf_init_br" 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:13.075 Cannot find device "nvmf_init_br2" 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:13.075 Cannot find device "nvmf_tgt_br" 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:13.075 Cannot find device "nvmf_tgt_br2" 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:13.075 Cannot find device "nvmf_init_br" 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:13.075 Cannot find device "nvmf_init_br2" 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:13.075 Cannot find device "nvmf_tgt_br" 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:13.075 Cannot find device "nvmf_tgt_br2" 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:13.075 Cannot find device "nvmf_br" 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:13.075 Cannot find device "nvmf_init_if" 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:13.075 Cannot find device "nvmf_init_if2" 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:13.075 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:13.075 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:17:13.076 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:13.076 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:17:13.076 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:17:13.076 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:13.076 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:13.076 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:13.076 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:13.076 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:13.076 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:13.076 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:13.076 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:13.076 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:13.076 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:13.076 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:13.076 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:13.076 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:13.076 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:13.076 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:13.076 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:13.076 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:13.333 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:13.333 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:13.333 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:13.333 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:13.333 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:13.333 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:13.333 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:13.333 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:13.333 12:16:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:13.333 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:13.333 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:13.333 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:13.333 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:13.333 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:13.333 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:13.333 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:13.333 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:13.333 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:17:13.333 00:17:13.333 --- 10.0.0.3 ping statistics --- 00:17:13.333 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.334 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:17:13.334 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:13.334 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:13.334 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.090 ms 00:17:13.334 00:17:13.334 --- 10.0.0.4 ping statistics --- 00:17:13.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.334 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:17:13.334 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:13.334 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:13.334 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.016 ms 00:17:13.334 00:17:13.334 --- 10.0.0.1 ping statistics --- 00:17:13.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.334 rtt min/avg/max/mdev = 0.016/0.016/0.016/0.000 ms 00:17:13.334 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:13.334 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:13.334 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms 00:17:13.334 00:17:13.334 --- 10.0.0.2 ping statistics --- 00:17:13.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.334 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:17:13.334 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:13.334 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:17:13.334 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:13.334 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:13.334 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:13.334 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:13.334 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:13.334 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:13.334 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:13.334 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:17:13.334 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:13.334 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:13.334 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:13.334 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=65064 00:17:13.334 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 65064 00:17:13.334 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 65064 ']' 00:17:13.334 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:13.334 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:13.334 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:13.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:13.334 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:13.334 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:13.334 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:13.334 [2024-11-25 12:16:09.092232] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
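[aside] With the namespace reachable, nvmfappstart launches the target inside it and waitforlisten polls the RPC socket until the application answers. A simplified standalone version of that launch-and-wait step — binary and socket paths taken from the trace; the polling loop is a simplification of autotest_common.sh's waitforlisten, not a copy of it:

  # Start the target in the test namespace (run as root).
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
  nvmfpid=$!
  # Poll the RPC socket until the target responds, bailing out if it died.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.1
  done
  echo "nvmf_tgt listening on /var/tmp/spdk.sock (pid $nvmfpid)"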
00:17:13.334 [2024-11-25 12:16:09.092309] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:13.592 [2024-11-25 12:16:09.235671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:13.592 [2024-11-25 12:16:09.272340] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:13.592 [2024-11-25 12:16:09.272383] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:13.592 [2024-11-25 12:16:09.272390] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:13.592 [2024-11-25 12:16:09.272395] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:13.592 [2024-11-25 12:16:09.272399] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:13.592 [2024-11-25 12:16:09.273080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:13.592 [2024-11-25 12:16:09.273144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:13.592 [2024-11-25 12:16:09.273146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.157 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:14.157 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:17:14.157 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:14.157 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:14.157 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:14.157 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:14.157 12:16:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:14.414 [2024-11-25 12:16:10.174369] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:14.414 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:14.670 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:17:14.670 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:14.931 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:17:14.931 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:17:15.187 12:16:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:17:15.444 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=242db5fd-684d-4a92-a08d-84c9a865a17a 00:17:15.444 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 
242db5fd-684d-4a92-a08d-84c9a865a17a lvol 20 00:17:15.701 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=6d63ad17-4607-4940-9c4e-526fd0a8fe80 00:17:15.701 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:15.959 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6d63ad17-4607-4940-9c4e-526fd0a8fe80 00:17:16.216 12:16:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:17:16.216 [2024-11-25 12:16:12.024847] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:16.216 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:17:16.474 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=65206 00:17:16.474 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:17:16.474 12:16:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:17:17.407 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 6d63ad17-4607-4940-9c4e-526fd0a8fe80 MY_SNAPSHOT 00:17:18.014 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=2ecf41fe-9f07-4472-b89f-fa76d7647cd9 00:17:18.014 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 6d63ad17-4607-4940-9c4e-526fd0a8fe80 30 00:17:18.014 12:16:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 2ecf41fe-9f07-4472-b89f-fa76d7647cd9 MY_CLONE 00:17:18.275 12:16:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=ff3f9153-1d31-4ebf-907e-966683000189 00:17:18.275 12:16:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate ff3f9153-1d31-4ebf-907e-966683000189 00:17:18.842 12:16:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 65206 00:17:26.959 Initializing NVMe Controllers 00:17:26.959 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:17:26.959 Controller IO queue size 128, less than required. 00:17:26.959 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:26.959 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:17:26.959 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:17:26.959 Initialization complete. Launching workers. 
00:17:26.959 ========================================================
00:17:26.959 Latency(us)
00:17:26.959 Device Information : IOPS MiB/s Average min max
00:17:26.959 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15258.80 59.60 8391.75 1881.52 49348.51
00:17:26.959 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 14551.70 56.84 8798.01 1667.57 109004.68
00:17:26.959 ========================================================
00:17:26.959 Total : 29810.50 116.45 8590.06 1667.57 109004.68
00:17:26.959
00:17:26.959 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:17:26.959 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 6d63ad17-4607-4940-9c4e-526fd0a8fe80
00:17:27.217 12:16:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 242db5fd-684d-4a92-a08d-84c9a865a17a
00:17:27.475 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:17:27.475 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:17:27.475 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:17:27.475 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup
00:17:27.475 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:17:27.475 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:17:27.475 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:17:27.475 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:17:27.475 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:17:27.475 rmmod nvme_tcp
00:17:27.475 rmmod nvme_fabrics
00:17:27.475 rmmod nvme_keyring
00:17:27.475 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:17:27.475 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:17:27.475 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:17:27.475 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 65064 ']'
00:17:27.475 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 65064
00:17:27.475 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 65064 ']'
00:17:27.475 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 65064
00:17:27.475 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname
00:17:27.475 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:27.475 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65064
00:17:27.475 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:27.475 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:17:27.475 killing process with pid 65064
12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65064'
00:17:27.475 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 65064
00:17:27.475 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 65064
00:17:27.733 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:17:27.733 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:17:27.733 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:17:27.733 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr
00:17:27.733 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save
00:17:27.733 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore
00:17:27.733 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:17:27.733 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:17:27.733 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:17:27.733 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:17:27.733 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:17:27.733 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:17:27.733 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:17:27.733 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:17:27.733 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:17:27.733 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:17:27.733 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:17:27.733 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:17:27.733 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:17:27.733 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:17:27.733 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:17:27.992 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:17:27.992 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns
00:17:27.992 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:27.992 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:17:27.992 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:27.992 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0
00:17:27.992
00:17:27.992 real 0m15.107s
00:17:27.992 user 1m3.793s
00:17:27.992 sys 0m3.101s
00:17:27.992 ************************************
00:17:27.992 END TEST nvmf_lvol
************************************ 00:17:27.992 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:27.992 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:27.992 12:16:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:27.992 12:16:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:27.992 12:16:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:27.992 12:16:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:17:27.992 ************************************ 00:17:27.992 START TEST nvmf_lvs_grow 00:17:27.992 ************************************ 00:17:27.992 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:27.992 * Looking for test storage... 00:17:27.992 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:27.992 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:27.992 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:27.992 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:17:27.992 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:27.992 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:27.992 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:27.992 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:27.992 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:17:27.992 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:17:27.992 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:17:27.992 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:17:27.992 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:17:27.992 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:17:27.992 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:17:27.992 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:27.992 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:17:27.992 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:17:27.992 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:27.992 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:28.251 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:17:28.251 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:17:28.251 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:28.251 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:17:28.251 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:17:28.251 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:17:28.251 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:17:28.251 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:28.251 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:17:28.251 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:17:28.251 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:28.251 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:28.251 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:17:28.251 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:28.251 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:28.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:28.251 --rc genhtml_branch_coverage=1 00:17:28.251 --rc genhtml_function_coverage=1 00:17:28.251 --rc genhtml_legend=1 00:17:28.251 --rc geninfo_all_blocks=1 00:17:28.251 --rc geninfo_unexecuted_blocks=1 00:17:28.251 00:17:28.251 ' 00:17:28.251 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:28.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:28.251 --rc genhtml_branch_coverage=1 00:17:28.251 --rc genhtml_function_coverage=1 00:17:28.251 --rc genhtml_legend=1 00:17:28.251 --rc geninfo_all_blocks=1 00:17:28.251 --rc geninfo_unexecuted_blocks=1 00:17:28.251 00:17:28.251 ' 00:17:28.251 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:28.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:28.251 --rc genhtml_branch_coverage=1 00:17:28.251 --rc genhtml_function_coverage=1 00:17:28.251 --rc genhtml_legend=1 00:17:28.251 --rc geninfo_all_blocks=1 00:17:28.251 --rc geninfo_unexecuted_blocks=1 00:17:28.251 00:17:28.251 ' 00:17:28.251 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:28.251 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:28.251 --rc genhtml_branch_coverage=1 00:17:28.251 --rc genhtml_function_coverage=1 00:17:28.251 --rc genhtml_legend=1 00:17:28.251 --rc geninfo_all_blocks=1 00:17:28.251 --rc geninfo_unexecuted_blocks=1 00:17:28.251 00:17:28.251 ' 00:17:28.251 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:28.251 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:17:28.251 12:16:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:28.251 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:28.251 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:28.251 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:28.251 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:28.251 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:28.251 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:28.251 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:28.251 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:28.251 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:28.251 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:17:28.251 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:17:28.251 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:28.251 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:28.251 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:28.251 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:28.251 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:28.251 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:17:28.251 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:28.251 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:28.251 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:28.251 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.251 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.251 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.251 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:17:28.251 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:28.252 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit
00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs
00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no
00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns
00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]]
00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]]
00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]]
00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]]
00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]]
00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init
00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2
00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3
00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4
00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1
00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br
00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2
00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2
00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster
00:17:28.252 Cannot find device "nvmf_init_br"
00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true
00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster
00:17:28.252 Cannot find device "nvmf_init_br2"
00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true
00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster
00:17:28.252 Cannot find device "nvmf_tgt_br"
00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true
00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster
00:17:28.252 Cannot find device "nvmf_tgt_br2"
00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true
00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down
00:17:28.252 Cannot find device "nvmf_init_br"
00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true
00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down
00:17:28.252 Cannot find device "nvmf_init_br2"
00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true
00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down
00:17:28.252 Cannot find device "nvmf_tgt_br"
00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true
00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down
00:17:28.252 Cannot find device "nvmf_tgt_br2"
00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true
00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge
00:17:28.252 Cannot find device "nvmf_br"
00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true
00:17:28.252 12:16:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if
00:17:28.252 Cannot find device "nvmf_init_if"
00:17:28.252 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true
00:17:28.252 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2
00:17:28.252 Cannot find device "nvmf_init_if2"
00:17:28.252 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true
00:17:28.252 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:17:28.252 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:17:28.252 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true
00:17:28.252 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:17:28.252 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:17:28.252 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true
00:17:28.252 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk
00:17:28.252 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:17:28.252 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
00:17:28.252 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:17:28.252 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:17:28.252 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:17:28.252 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:17:28.252 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:17:28.252 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:17:28.252 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:17:28.252 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:17:28.252 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:17:28.252 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:17:28.252 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:17:28.252 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:17:28.252 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:17:28.252 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:17:28.252 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:17:28.253 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:17:28.253 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:17:28.253 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:17:28.253 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:17:28.253 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:17:28.510 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:17:28.510 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:17:28.510 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:17:28.510 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:17:28.510 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:17:28.510 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:17:28.510 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:17:28.510 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:17:28.510 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:17:28.510 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:17:28.510 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:17:28.510 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms
00:17:28.510
00:17:28.510 --- 10.0.0.3 ping statistics ---
00:17:28.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:28.510 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms
00:17:28.510 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:17:28.510 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:17:28.510 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms
00:17:28.510
00:17:28.510 --- 10.0.0.4 ping statistics ---
00:17:28.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:28.510 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms
00:17:28.510 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:17:28.510 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:17:28.510 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.015 ms
00:17:28.510
00:17:28.510 --- 10.0.0.1 ping statistics ---
00:17:28.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:28.510 rtt min/avg/max/mdev = 0.015/0.015/0.015/0.000 ms
00:17:28.510 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:17:28.510 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:17:28.510 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms
00:17:28.510
00:17:28.510 --- 10.0.0.2 ping statistics ---
00:17:28.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:28.510 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms
00:17:28.510 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:17:28.510 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0
00:17:28.510 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:17:28.510 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:17:28.510 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:17:28.510 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:17:28.510 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:17:28.510 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:17:28.510 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:17:28.510 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1
00:17:28.510 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:17:28.510 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable
00:17:28.510 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:17:28.510 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=65626
00:17:28.510 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:17:28.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:28.510 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 65626
00:17:28.510 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 65626 ']'
00:17:28.510 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:28.510 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100
00:17:28.510 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:28.511 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:28.511 12:16:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:17:28.511 [2024-11-25 12:16:24.237810] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization...
00:17:28.511 [2024-11-25 12:16:24.237987] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:28.511 [2024-11-25 12:16:24.377261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.768 [2024-11-25 12:16:24.413393] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:28.768 [2024-11-25 12:16:24.413433] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:28.768 [2024-11-25 12:16:24.413439] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:28.768 [2024-11-25 12:16:24.413444] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:28.768 [2024-11-25 12:16:24.413449] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:28.768 [2024-11-25 12:16:24.413720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.332 12:16:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:29.332 12:16:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:17:29.332 12:16:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:29.332 12:16:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:29.332 12:16:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:29.590 12:16:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:29.590 12:16:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:29.590 [2024-11-25 12:16:25.397149] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:29.590 12:16:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:17:29.590 12:16:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:29.590 12:16:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:29.590 12:16:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:29.590 ************************************ 00:17:29.590 START TEST lvs_grow_clean 00:17:29.590 ************************************ 00:17:29.590 12:16:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:17:29.590 12:16:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:29.590 12:16:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:29.590 12:16:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:29.590 12:16:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:29.590 12:16:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:29.590 12:16:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:29.590 12:16:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:17:29.590 12:16:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:17:29.590 12:16:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:29.847 12:16:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:29.847 12:16:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:30.105 12:16:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=5f908b73-e657-46d8-bc0c-71108ce645eb 00:17:30.105 12:16:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f908b73-e657-46d8-bc0c-71108ce645eb 00:17:30.106 12:16:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:30.364 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:30.364 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:30.364 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5f908b73-e657-46d8-bc0c-71108ce645eb lvol 150 00:17:30.622 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=57d7481a-1adc-434b-8e26-d21870923e98 00:17:30.622 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:17:30.622 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:30.622 [2024-11-25 12:16:26.437026] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:30.622 [2024-11-25 12:16:26.437083] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:30.622 true 00:17:30.622 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f908b73-e657-46d8-bc0c-71108ce645eb 00:17:30.623 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:30.881 12:16:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:30.881 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:31.139 12:16:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 57d7481a-1adc-434b-8e26-d21870923e98 00:17:31.398 12:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:17:31.398 [2024-11-25 12:16:27.249465] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:31.398 12:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:17:31.656 12:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:31.656 12:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=65782 00:17:31.656 12:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:31.656 12:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 65782 /var/tmp/bdevperf.sock 00:17:31.656 12:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 65782 ']' 00:17:31.656 12:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:31.656 12:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:31.656 12:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:31.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:31.656 12:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:31.656 12:16:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:31.656 [2024-11-25 12:16:27.503475] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:17:31.656 [2024-11-25 12:16:27.503542] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65782 ]
00:17:31.913 [2024-11-25 12:16:27.642453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:31.913 [2024-11-25 12:16:27.684784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:17:32.846 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:17:32.847 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0
00:17:32.847 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
00:17:32.847 Nvme0n1
00:17:32.847 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000
00:17:33.108 [
00:17:33.108 {
00:17:33.108 "aliases": [
00:17:33.108 "57d7481a-1adc-434b-8e26-d21870923e98"
00:17:33.108 ],
00:17:33.108 "assigned_rate_limits": {
00:17:33.108 "r_mbytes_per_sec": 0,
00:17:33.108 "rw_ios_per_sec": 0,
00:17:33.108 "rw_mbytes_per_sec": 0,
00:17:33.108 "w_mbytes_per_sec": 0
00:17:33.108 },
00:17:33.108 "block_size": 4096,
00:17:33.108 "claimed": false,
00:17:33.108 "driver_specific": {
00:17:33.108 "mp_policy": "active_passive",
00:17:33.108 "nvme": [
00:17:33.108 {
00:17:33.108 "ctrlr_data": {
00:17:33.108 "ana_reporting": false,
00:17:33.108 "cntlid": 1,
00:17:33.108 "firmware_revision": "25.01",
00:17:33.108 "model_number": "SPDK bdev Controller",
00:17:33.108 "multi_ctrlr": true,
00:17:33.108 "oacs": {
00:17:33.108 "firmware": 0,
00:17:33.108 "format": 0,
00:17:33.108 "ns_manage": 0,
00:17:33.108 "security": 0
00:17:33.108 },
00:17:33.108 "serial_number": "SPDK0",
00:17:33.108 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:17:33.108 "vendor_id": "0x8086"
00:17:33.108 },
00:17:33.108 "ns_data": {
00:17:33.108 "can_share": true,
00:17:33.108 "id": 1
00:17:33.108 },
00:17:33.108 "trid": {
00:17:33.108 "adrfam": "IPv4",
00:17:33.108 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:17:33.108 "traddr": "10.0.0.3",
00:17:33.108 "trsvcid": "4420",
00:17:33.108 "trtype": "TCP"
00:17:33.108 },
00:17:33.108 "vs": {
00:17:33.108 "nvme_version": "1.3"
00:17:33.108 }
00:17:33.108 }
00:17:33.108 ]
00:17:33.108 },
00:17:33.108 "memory_domains": [
00:17:33.108 {
00:17:33.108 "dma_device_id": "system",
00:17:33.108 "dma_device_type": 1
00:17:33.108 }
00:17:33.108 ],
00:17:33.108 "name": "Nvme0n1",
00:17:33.108 "num_blocks": 38912,
00:17:33.108 "numa_id": -1,
00:17:33.108 "product_name": "NVMe disk",
00:17:33.108 "supported_io_types": {
00:17:33.108 "abort": true,
00:17:33.108 "compare": true,
00:17:33.108 "compare_and_write": true,
00:17:33.108 "copy": true,
00:17:33.108 "flush": true,
00:17:33.108 "get_zone_info": false,
00:17:33.108 "nvme_admin": true,
00:17:33.108 "nvme_io": true,
00:17:33.108 "nvme_io_md": false,
00:17:33.108 "nvme_iov_md": false,
00:17:33.108 "read": true,
00:17:33.108 "reset": true,
00:17:33.108 "seek_data": false,
00:17:33.108 "seek_hole": false,
00:17:33.108 "unmap": true,
00:17:33.108 "write": true,
00:17:33.108 "write_zeroes": true,
00:17:33.108 "zcopy": false,
00:17:33.108 "zone_append": false,
00:17:33.108 "zone_management": false
00:17:33.108 },
00:17:33.108 "uuid": "57d7481a-1adc-434b-8e26-d21870923e98",
00:17:33.108 "zoned": false
00:17:33.108 }
00:17:33.108 ]
00:17:33.108 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=65830
00:17:33.108 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2
00:17:33.108 12:16:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:17:33.108 Running I/O for 10 seconds...
00:17:34.482 Latency(us)
00:17:34.482 [2024-11-25T12:16:30.358Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:34.482 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:34.482 Nvme0n1 : 1.00 10000.00 39.06 0.00 0.00 0.00 0.00 0.00
00:17:34.482 [2024-11-25T12:16:30.358Z] ===================================================================================================================
00:17:34.482 [2024-11-25T12:16:30.358Z] Total : 10000.00 39.06 0.00 0.00 0.00 0.00 0.00
00:17:34.482
00:17:35.048 12:16:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5f908b73-e657-46d8-bc0c-71108ce645eb
00:17:35.306 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:35.306 Nvme0n1 : 2.00 10184.50 39.78 0.00 0.00 0.00 0.00 0.00
00:17:35.306 [2024-11-25T12:16:31.182Z] ===================================================================================================================
00:17:35.306 [2024-11-25T12:16:31.182Z] Total : 10184.50 39.78 0.00 0.00 0.00 0.00 0.00
00:17:35.306
00:17:35.306 true
00:17:35.306 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f908b73-e657-46d8-bc0c-71108ce645eb
00:17:35.306 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:17:35.565 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:17:35.565 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:17:35.565 12:16:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 65830
00:17:36.131 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:36.131 Nvme0n1 : 3.00 10268.00 40.11 0.00 0.00 0.00 0.00 0.00
00:17:36.131 [2024-11-25T12:16:32.007Z] ===================================================================================================================
00:17:36.131 [2024-11-25T12:16:32.007Z] Total : 10268.00 40.11 0.00 0.00 0.00 0.00 0.00
00:17:36.131
00:17:37.504 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:37.504 Nvme0n1 : 4.00 9570.25 37.38 0.00 0.00 0.00 0.00 0.00
00:17:37.504 [2024-11-25T12:16:33.380Z] ===================================================================================================================
00:17:37.504 [2024-11-25T12:16:33.380Z] Total : 9570.25 37.38 0.00 0.00 0.00 0.00 0.00
00:17:37.504 00
00:17:38.438 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:38.438 Nvme0n1 : 5.00 10177.20 39.75 0.00 0.00 0.00 0.00 0.00
00:17:38.438 [2024-11-25T12:16:34.314Z] ===================================================================================================================
00:17:38.438 [2024-11-25T12:16:34.314Z] Total : 10177.20 39.75 0.00 0.00 0.00 0.00 0.00
00:17:38.438
00:17:39.371 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:39.371 Nvme0n1 : 6.00 10508.00 41.05 0.00 0.00 0.00 0.00 0.00
00:17:39.371 [2024-11-25T12:16:35.247Z] ===================================================================================================================
00:17:39.371 [2024-11-25T12:16:35.247Z] Total : 10508.00 41.05 0.00 0.00 0.00 0.00 0.00
00:17:39.371
00:17:40.305 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:40.305 Nvme0n1 : 7.00 10782.14 42.12 0.00 0.00 0.00 0.00 0.00
00:17:40.305 [2024-11-25T12:16:36.181Z] ===================================================================================================================
00:17:40.305 [2024-11-25T12:16:36.181Z] Total : 10782.14 42.12 0.00 0.00 0.00 0.00 0.00
00:17:40.305
00:17:41.239 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:41.239 Nvme0n1 : 8.00 10954.50 42.79 0.00 0.00 0.00 0.00 0.00
00:17:41.239 [2024-11-25T12:16:37.115Z] ===================================================================================================================
00:17:41.239 [2024-11-25T12:16:37.115Z] Total : 10954.50 42.79 0.00 0.00 0.00 0.00 0.00
00:17:41.239
00:17:42.173 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:42.173 Nvme0n1 : 9.00 11080.22 43.28 0.00 0.00 0.00 0.00 0.00
00:17:42.173 [2024-11-25T12:16:38.049Z] ===================================================================================================================
00:17:42.173 [2024-11-25T12:16:38.049Z] Total : 11080.22 43.28 0.00 0.00 0.00 0.00 0.00
00:17:42.173
00:17:43.109 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:43.109 Nvme0n1 : 10.00 11166.00 43.62 0.00 0.00 0.00 0.00 0.00
00:17:43.109 [2024-11-25T12:16:38.985Z] ===================================================================================================================
00:17:43.109 [2024-11-25T12:16:38.985Z] Total : 11166.00 43.62 0.00 0.00 0.00 0.00 0.00
00:17:43.109
00:17:43.109
00:17:43.109 Latency(us)
00:17:43.109 [2024-11-25T12:16:38.985Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:43.109 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:43.109 Nvme0n1 : 10.00 11174.67 43.65 0.00 0.00 11450.64 4889.99 304893.64
00:17:43.109 [2024-11-25T12:16:38.985Z] ===================================================================================================================
00:17:43.109 [2024-11-25T12:16:38.985Z] Total : 11174.67 43.65 0.00 0.00 11450.64 4889.99 304893.64
00:17:43.109 {
00:17:43.109 "results": [
00:17:43.109 {
00:17:43.109 "job": "Nvme0n1",
00:17:43.109 "core_mask": "0x2",
00:17:43.109 "workload": "randwrite",
00:17:43.109 "status": "finished",
00:17:43.109 "queue_depth": 128,
00:17:43.109 "io_size": 4096,
00:17:43.109 "runtime": 10.003694,
00:17:43.109 "iops": 11174.672076135075,
00:17:43.109 "mibps": 43.65106279740264,
00:17:43.109 "io_failed": 0,
00:17:43.109 "io_timeout": 0,
"avg_latency_us": 11450.641616466333,
00:17:43.109 "min_latency_us": 4889.993846153846,
00:17:43.109 "max_latency_us": 304893.63692307693
00:17:43.109 }
00:17:43.109 ],
00:17:43.109 "core_count": 1
00:17:43.109 }
00:17:43.367 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 65782
00:17:43.367 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 65782 ']'
00:17:43.367 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 65782
00:17:43.367 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname
00:17:43.367 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:43.367 12:16:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65782
00:17:43.367 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:17:43.367 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:17:43.367 killing process with pid 65782
00:17:43.367 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65782'
00:17:43.367 Received shutdown signal, test time was about 10.000000 seconds
00:17:43.367
00:17:43.367 Latency(us)
00:17:43.367 [2024-11-25T12:16:39.243Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:43.367 [2024-11-25T12:16:39.243Z] ===================================================================================================================
00:17:43.367 [2024-11-25T12:16:39.243Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:17:43.367 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 65782
00:17:43.367 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 65782
00:17:43.367 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420
00:17:43.625 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:17:43.883 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f908b73-e657-46d8-bc0c-71108ce645eb
00:17:43.883 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters'
00:17:44.141 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61
00:17:44.141 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]]
00:17:44.141 12:16:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev
00:17:44.399 [2024-11-25 12:16:40.067547] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed:
closing lvstore lvs 00:17:44.399 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f908b73-e657-46d8-bc0c-71108ce645eb 00:17:44.399 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:17:44.399 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f908b73-e657-46d8-bc0c-71108ce645eb 00:17:44.399 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:44.399 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:44.399 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:44.399 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:44.399 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:44.399 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:44.399 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:44.399 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:44.399 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f908b73-e657-46d8-bc0c-71108ce645eb 00:17:44.657 2024/11/25 12:16:40 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:5f908b73-e657-46d8-bc0c-71108ce645eb], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:17:44.657 request: 00:17:44.657 { 00:17:44.657 "method": "bdev_lvol_get_lvstores", 00:17:44.657 "params": { 00:17:44.657 "uuid": "5f908b73-e657-46d8-bc0c-71108ce645eb" 00:17:44.657 } 00:17:44.657 } 00:17:44.657 Got JSON-RPC error response 00:17:44.657 GoRPCClient: error on JSON-RPC call 00:17:44.657 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:17:44.657 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:44.657 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:44.658 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:44.658 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:44.916 aio_bdev 00:17:44.916 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 57d7481a-1adc-434b-8e26-d21870923e98 
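The error-path check above reduces to a short RPC sequence. The sketch below is illustrative only: it assumes a running target on the default /var/tmp/spdk.sock and the stock rpc.py client, and it reuses this run's lvstore UUID and aio file path purely as examples.

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    aio_file=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
    lvs_uuid=5f908b73-e657-46d8-bc0c-71108ce645eb

    $rpc bdev_aio_delete aio_bdev                   # hot-remove the base bdev; the lvstore closes
    if $rpc bdev_lvol_get_lvstores -u "$lvs_uuid"; then
        # the query is expected to fail (-19, No such device) once the base bdev is gone
        echo "lvstore still visible after base bdev removal" >&2
        exit 1
    fi
    $rpc bdev_aio_create "$aio_file" aio_bdev 4096  # re-create the backing bdev
    $rpc bdev_wait_for_examine                      # lvol module re-claims it and re-exposes lvs/lvol

The same pattern repeats for the dirty variant further down, with only the lvstore UUID differing.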
00:17:44.916 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=57d7481a-1adc-434b-8e26-d21870923e98 00:17:44.916 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:44.916 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:17:44.916 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:44.916 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:44.916 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:44.916 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 57d7481a-1adc-434b-8e26-d21870923e98 -t 2000 00:17:45.174 [ 00:17:45.174 { 00:17:45.174 "aliases": [ 00:17:45.174 "lvs/lvol" 00:17:45.174 ], 00:17:45.174 "assigned_rate_limits": { 00:17:45.174 "r_mbytes_per_sec": 0, 00:17:45.174 "rw_ios_per_sec": 0, 00:17:45.174 "rw_mbytes_per_sec": 0, 00:17:45.174 "w_mbytes_per_sec": 0 00:17:45.174 }, 00:17:45.174 "block_size": 4096, 00:17:45.174 "claimed": false, 00:17:45.174 "driver_specific": { 00:17:45.174 "lvol": { 00:17:45.174 "base_bdev": "aio_bdev", 00:17:45.174 "clone": false, 00:17:45.174 "esnap_clone": false, 00:17:45.174 "lvol_store_uuid": "5f908b73-e657-46d8-bc0c-71108ce645eb", 00:17:45.174 "num_allocated_clusters": 38, 00:17:45.174 "snapshot": false, 00:17:45.174 "thin_provision": false 00:17:45.174 } 00:17:45.174 }, 00:17:45.174 "name": "57d7481a-1adc-434b-8e26-d21870923e98", 00:17:45.174 "num_blocks": 38912, 00:17:45.174 "product_name": "Logical Volume", 00:17:45.174 "supported_io_types": { 00:17:45.174 "abort": false, 00:17:45.174 "compare": false, 00:17:45.174 "compare_and_write": false, 00:17:45.174 "copy": false, 00:17:45.174 "flush": false, 00:17:45.174 "get_zone_info": false, 00:17:45.174 "nvme_admin": false, 00:17:45.174 "nvme_io": false, 00:17:45.174 "nvme_io_md": false, 00:17:45.174 "nvme_iov_md": false, 00:17:45.174 "read": true, 00:17:45.174 "reset": true, 00:17:45.174 "seek_data": true, 00:17:45.174 "seek_hole": true, 00:17:45.174 "unmap": true, 00:17:45.174 "write": true, 00:17:45.174 "write_zeroes": true, 00:17:45.174 "zcopy": false, 00:17:45.174 "zone_append": false, 00:17:45.174 "zone_management": false 00:17:45.174 }, 00:17:45.174 "uuid": "57d7481a-1adc-434b-8e26-d21870923e98", 00:17:45.174 "zoned": false 00:17:45.174 } 00:17:45.174 ] 00:17:45.174 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:17:45.174 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f908b73-e657-46d8-bc0c-71108ce645eb 00:17:45.174 12:16:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:45.432 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:45.432 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 5f908b73-e657-46d8-bc0c-71108ce645eb 00:17:45.432 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:45.690 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:45.690 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 57d7481a-1adc-434b-8e26-d21870923e98 00:17:45.690 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5f908b73-e657-46d8-bc0c-71108ce645eb 00:17:45.948 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:46.205 12:16:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:17:46.771 ************************************ 00:17:46.771 END TEST lvs_grow_clean 00:17:46.771 ************************************ 00:17:46.771 00:17:46.771 real 0m16.972s 00:17:46.771 user 0m16.421s 00:17:46.771 sys 0m1.631s 00:17:46.771 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:46.771 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:46.771 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:17:46.771 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:46.771 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:46.771 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:46.771 ************************************ 00:17:46.771 START TEST lvs_grow_dirty 00:17:46.771 ************************************ 00:17:46.771 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:17:46.771 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:46.771 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:46.771 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:46.771 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:46.771 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:46.771 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:46.771 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:17:46.771 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:17:46.772 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:47.029 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:47.029 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:47.286 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=7d8fec3a-213d-42c4-89c7-dca521b9c0f5 00:17:47.286 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:47.286 12:16:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7d8fec3a-213d-42c4-89c7-dca521b9c0f5 00:17:47.286 12:16:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:47.286 12:16:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:47.286 12:16:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 7d8fec3a-213d-42c4-89c7-dca521b9c0f5 lvol 150 00:17:47.544 12:16:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=56204a85-7fe4-4cd5-9c89-713b60b6c442 00:17:47.544 12:16:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:17:47.544 12:16:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:47.802 [2024-11-25 12:16:43.543756] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:47.802 [2024-11-25 12:16:43.543813] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:47.802 true 00:17:47.802 12:16:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7d8fec3a-213d-42c4-89c7-dca521b9c0f5 00:17:47.802 12:16:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:48.060 12:16:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:48.060 12:16:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:48.317 12:16:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 56204a85-7fe4-4cd5-9c89-713b60b6c442 00:17:48.317 12:16:44 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:17:48.573 [2024-11-25 12:16:44.388173] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:48.573 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:17:48.830 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=66216 00:17:48.830 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:48.830 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:48.830 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 66216 /var/tmp/bdevperf.sock 00:17:48.830 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 66216 ']' 00:17:48.830 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:48.830 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:48.830 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:48.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:48.830 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:48.830 12:16:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:48.830 [2024-11-25 12:16:44.653783] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
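The export path exercised here is the usual three-RPC NVMe/TCP setup, followed by attaching bdevperf to the target over bdevperf's own RPC socket. A minimal sketch using only commands visible in this run; the subsystem name, address, and lvol UUID are this run's values, shown for illustration:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    lvol=56204a85-7fe4-4cd5-9c89-713b60b6c442       # lvol bdev created above

    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420

    # bdevperf runs as a separate SPDK app with its own RPC socket; point it at the target:
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0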
00:17:48.830 [2024-11-25 12:16:44.653841] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66216 ] 00:17:49.088 [2024-11-25 12:16:44.791863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.088 [2024-11-25 12:16:44.835586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:50.022 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:50.022 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:17:50.022 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:50.022 Nvme0n1 00:17:50.022 12:16:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:50.280 [ 00:17:50.280 { 00:17:50.280 "aliases": [ 00:17:50.280 "56204a85-7fe4-4cd5-9c89-713b60b6c442" 00:17:50.280 ], 00:17:50.280 "assigned_rate_limits": { 00:17:50.280 "r_mbytes_per_sec": 0, 00:17:50.280 "rw_ios_per_sec": 0, 00:17:50.280 "rw_mbytes_per_sec": 0, 00:17:50.280 "w_mbytes_per_sec": 0 00:17:50.280 }, 00:17:50.280 "block_size": 4096, 00:17:50.280 "claimed": false, 00:17:50.280 "driver_specific": { 00:17:50.280 "mp_policy": "active_passive", 00:17:50.280 "nvme": [ 00:17:50.280 { 00:17:50.280 "ctrlr_data": { 00:17:50.280 "ana_reporting": false, 00:17:50.280 "cntlid": 1, 00:17:50.280 "firmware_revision": "25.01", 00:17:50.280 "model_number": "SPDK bdev Controller", 00:17:50.280 "multi_ctrlr": true, 00:17:50.280 "oacs": { 00:17:50.280 "firmware": 0, 00:17:50.280 "format": 0, 00:17:50.280 "ns_manage": 0, 00:17:50.280 "security": 0 00:17:50.280 }, 00:17:50.280 "serial_number": "SPDK0", 00:17:50.280 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:50.280 "vendor_id": "0x8086" 00:17:50.280 }, 00:17:50.280 "ns_data": { 00:17:50.280 "can_share": true, 00:17:50.280 "id": 1 00:17:50.280 }, 00:17:50.280 "trid": { 00:17:50.280 "adrfam": "IPv4", 00:17:50.280 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:50.280 "traddr": "10.0.0.3", 00:17:50.280 "trsvcid": "4420", 00:17:50.280 "trtype": "TCP" 00:17:50.280 }, 00:17:50.280 "vs": { 00:17:50.280 "nvme_version": "1.3" 00:17:50.280 } 00:17:50.280 } 00:17:50.280 ] 00:17:50.280 }, 00:17:50.280 "memory_domains": [ 00:17:50.280 { 00:17:50.280 "dma_device_id": "system", 00:17:50.280 "dma_device_type": 1 00:17:50.280 } 00:17:50.280 ], 00:17:50.280 "name": "Nvme0n1", 00:17:50.280 "num_blocks": 38912, 00:17:50.280 "numa_id": -1, 00:17:50.280 "product_name": "NVMe disk", 00:17:50.280 "supported_io_types": { 00:17:50.280 "abort": true, 00:17:50.280 "compare": true, 00:17:50.280 "compare_and_write": true, 00:17:50.280 "copy": true, 00:17:50.280 "flush": true, 00:17:50.280 "get_zone_info": false, 00:17:50.280 "nvme_admin": true, 00:17:50.280 "nvme_io": true, 00:17:50.280 "nvme_io_md": false, 00:17:50.280 "nvme_iov_md": false, 00:17:50.280 "read": true, 00:17:50.280 "reset": true, 00:17:50.280 "seek_data": false, 00:17:50.280 "seek_hole": false, 00:17:50.280 "unmap": true, 00:17:50.280 
"write": true, 00:17:50.280 "write_zeroes": true, 00:17:50.280 "zcopy": false, 00:17:50.280 "zone_append": false, 00:17:50.280 "zone_management": false 00:17:50.280 }, 00:17:50.280 "uuid": "56204a85-7fe4-4cd5-9c89-713b60b6c442", 00:17:50.280 "zoned": false 00:17:50.280 } 00:17:50.280 ] 00:17:50.280 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=66258 00:17:50.280 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:50.280 12:16:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:50.280 Running I/O for 10 seconds... 00:17:51.654 Latency(us) 00:17:51.654 [2024-11-25T12:16:47.530Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:51.654 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:51.654 Nvme0n1 : 1.00 13471.00 52.62 0.00 0.00 0.00 0.00 0.00 00:17:51.654 [2024-11-25T12:16:47.530Z] =================================================================================================================== 00:17:51.654 [2024-11-25T12:16:47.530Z] Total : 13471.00 52.62 0.00 0.00 0.00 0.00 0.00 00:17:51.654 00:17:52.220 12:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7d8fec3a-213d-42c4-89c7-dca521b9c0f5 00:17:52.478 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:52.478 Nvme0n1 : 2.00 13205.50 51.58 0.00 0.00 0.00 0.00 0.00 00:17:52.478 [2024-11-25T12:16:48.354Z] =================================================================================================================== 00:17:52.478 [2024-11-25T12:16:48.354Z] Total : 13205.50 51.58 0.00 0.00 0.00 0.00 0.00 00:17:52.478 00:17:52.478 true 00:17:52.478 12:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:52.478 12:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7d8fec3a-213d-42c4-89c7-dca521b9c0f5 00:17:52.735 12:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:52.735 12:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:52.735 12:16:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 66258 00:17:53.300 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:53.300 Nvme0n1 : 3.00 12958.33 50.62 0.00 0.00 0.00 0.00 0.00 00:17:53.300 [2024-11-25T12:16:49.176Z] =================================================================================================================== 00:17:53.300 [2024-11-25T12:16:49.176Z] Total : 12958.33 50.62 0.00 0.00 0.00 0.00 0.00 00:17:53.300 00:17:54.719 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:54.719 Nvme0n1 : 4.00 10605.00 41.43 0.00 0.00 0.00 0.00 0.00 00:17:54.719 [2024-11-25T12:16:50.595Z] =================================================================================================================== 00:17:54.719 [2024-11-25T12:16:50.595Z] Total : 10605.00 41.43 0.00 
0.00 0.00 0.00 0.00 00:17:54.719 00:17:55.310 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:55.310 Nvme0n1 : 5.00 9921.00 38.75 0.00 0.00 0.00 0.00 0.00 00:17:55.310 [2024-11-25T12:16:51.186Z] =================================================================================================================== 00:17:55.310 [2024-11-25T12:16:51.186Z] Total : 9921.00 38.75 0.00 0.00 0.00 0.00 0.00 00:17:55.310 00:17:56.679 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:56.679 Nvme0n1 : 6.00 10318.17 40.31 0.00 0.00 0.00 0.00 0.00 00:17:56.679 [2024-11-25T12:16:52.555Z] =================================================================================================================== 00:17:56.679 [2024-11-25T12:16:52.555Z] Total : 10318.17 40.31 0.00 0.00 0.00 0.00 0.00 00:17:56.679 00:17:57.614 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:57.614 Nvme0n1 : 7.00 10654.14 41.62 0.00 0.00 0.00 0.00 0.00 00:17:57.614 [2024-11-25T12:16:53.490Z] =================================================================================================================== 00:17:57.614 [2024-11-25T12:16:53.490Z] Total : 10654.14 41.62 0.00 0.00 0.00 0.00 0.00 00:17:57.614 00:17:58.547 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:58.547 Nvme0n1 : 8.00 10658.62 41.64 0.00 0.00 0.00 0.00 0.00 00:17:58.547 [2024-11-25T12:16:54.423Z] =================================================================================================================== 00:17:58.547 [2024-11-25T12:16:54.423Z] Total : 10658.62 41.64 0.00 0.00 0.00 0.00 0.00 00:17:58.547 00:17:59.479 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:59.479 Nvme0n1 : 9.00 10030.67 39.18 0.00 0.00 0.00 0.00 0.00 00:17:59.479 [2024-11-25T12:16:55.355Z] =================================================================================================================== 00:17:59.479 [2024-11-25T12:16:55.355Z] Total : 10030.67 39.18 0.00 0.00 0.00 0.00 0.00 00:17:59.479 00:18:00.428 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:00.428 Nvme0n1 : 10.00 9293.80 36.30 0.00 0.00 0.00 0.00 0.00 00:18:00.428 [2024-11-25T12:16:56.304Z] =================================================================================================================== 00:18:00.428 [2024-11-25T12:16:56.304Z] Total : 9293.80 36.30 0.00 0.00 0.00 0.00 0.00 00:18:00.428 00:18:00.428 00:18:00.428 Latency(us) 00:18:00.428 [2024-11-25T12:16:56.304Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:00.428 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:00.428 Nvme0n1 : 10.00 9303.78 36.34 0.00 0.00 13752.38 3528.86 1322818.95 00:18:00.428 [2024-11-25T12:16:56.304Z] =================================================================================================================== 00:18:00.428 [2024-11-25T12:16:56.304Z] Total : 9303.78 36.34 0.00 0.00 13752.38 3528.86 1322818.95 00:18:00.428 { 00:18:00.428 "results": [ 00:18:00.428 { 00:18:00.428 "job": "Nvme0n1", 00:18:00.428 "core_mask": "0x2", 00:18:00.428 "workload": "randwrite", 00:18:00.428 "status": "finished", 00:18:00.428 "queue_depth": 128, 00:18:00.428 "io_size": 4096, 00:18:00.428 "runtime": 10.003036, 00:18:00.428 "iops": 9303.775373796516, 00:18:00.428 "mibps": 36.34287255389264, 00:18:00.428 "io_failed": 0, 00:18:00.428 "io_timeout": 0, 00:18:00.428 
"avg_latency_us": 13752.381227152278, 00:18:00.428 "min_latency_us": 3528.8615384615387, 00:18:00.428 "max_latency_us": 1322818.9538461538 00:18:00.428 } 00:18:00.428 ], 00:18:00.428 "core_count": 1 00:18:00.428 } 00:18:00.428 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 66216 00:18:00.428 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 66216 ']' 00:18:00.428 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 66216 00:18:00.428 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:18:00.428 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:00.428 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66216 00:18:00.428 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:00.428 killing process with pid 66216 00:18:00.428 Received shutdown signal, test time was about 10.000000 seconds 00:18:00.428 00:18:00.428 Latency(us) 00:18:00.428 [2024-11-25T12:16:56.304Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:00.428 [2024-11-25T12:16:56.304Z] =================================================================================================================== 00:18:00.428 [2024-11-25T12:16:56.304Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:00.428 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:00.428 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66216' 00:18:00.428 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 66216 00:18:00.428 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 66216 00:18:00.428 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:18:00.686 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:00.946 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:18:00.946 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7d8fec3a-213d-42c4-89c7-dca521b9c0f5 00:18:01.204 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:18:01.204 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:18:01.204 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 65626 00:18:01.204 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 65626 00:18:01.204 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 65626 Killed "${NVMF_APP[@]}" "$@" 00:18:01.204 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:18:01.204 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:18:01.204 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:01.204 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:01.204 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:01.204 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=66426 00:18:01.204 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 66426 00:18:01.204 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 66426 ']' 00:18:01.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:01.204 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.204 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:01.204 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:01.204 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:01.204 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:01.204 12:16:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:01.204 [2024-11-25 12:16:57.011189] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:18:01.204 [2024-11-25 12:16:57.011252] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:01.462 [2024-11-25 12:16:57.151325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.462 [2024-11-25 12:16:57.187071] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:01.462 [2024-11-25 12:16:57.187264] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:01.462 [2024-11-25 12:16:57.187660] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:01.462 [2024-11-25 12:16:57.187883] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:01.462 [2024-11-25 12:16:57.187902] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
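The restart just logged (kill -9 the old target, bring up a fresh one, block until its RPC socket answers) can be approximated as below. The poll loop is an illustrative stand-in for the suite's waitforlisten helper, not its actual implementation, and rpc_get_methods is used only as a cheap liveness probe:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    until $rpc -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5                                   # keep polling until the app is listening
    done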
00:18:01.462 [2024-11-25 12:16:57.188184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:02.393 12:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:02.393 12:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:18:02.393 12:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:02.393 12:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:02.393 12:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:02.393 12:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:02.393 12:16:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:02.393 [2024-11-25 12:16:58.153381] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:18:02.393 [2024-11-25 12:16:58.153766] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:18:02.393 [2024-11-25 12:16:58.153964] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:18:02.393 12:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:18:02.393 12:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 56204a85-7fe4-4cd5-9c89-713b60b6c442 00:18:02.393 12:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=56204a85-7fe4-4cd5-9c89-713b60b6c442 00:18:02.393 12:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:02.393 12:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:18:02.393 12:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:02.393 12:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:02.393 12:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:02.651 12:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 56204a85-7fe4-4cd5-9c89-713b60b6c442 -t 2000 00:18:02.909 [ 00:18:02.909 { 00:18:02.909 "aliases": [ 00:18:02.909 "lvs/lvol" 00:18:02.909 ], 00:18:02.909 "assigned_rate_limits": { 00:18:02.909 "r_mbytes_per_sec": 0, 00:18:02.909 "rw_ios_per_sec": 0, 00:18:02.909 "rw_mbytes_per_sec": 0, 00:18:02.909 "w_mbytes_per_sec": 0 00:18:02.909 }, 00:18:02.909 "block_size": 4096, 00:18:02.909 "claimed": false, 00:18:02.909 "driver_specific": { 00:18:02.909 "lvol": { 00:18:02.909 "base_bdev": "aio_bdev", 00:18:02.909 "clone": false, 00:18:02.909 "esnap_clone": false, 00:18:02.909 "lvol_store_uuid": "7d8fec3a-213d-42c4-89c7-dca521b9c0f5", 00:18:02.909 "num_allocated_clusters": 38, 00:18:02.909 "snapshot": false, 00:18:02.909 
"thin_provision": false 00:18:02.909 } 00:18:02.909 }, 00:18:02.909 "name": "56204a85-7fe4-4cd5-9c89-713b60b6c442", 00:18:02.909 "num_blocks": 38912, 00:18:02.909 "product_name": "Logical Volume", 00:18:02.909 "supported_io_types": { 00:18:02.909 "abort": false, 00:18:02.909 "compare": false, 00:18:02.909 "compare_and_write": false, 00:18:02.909 "copy": false, 00:18:02.909 "flush": false, 00:18:02.909 "get_zone_info": false, 00:18:02.909 "nvme_admin": false, 00:18:02.909 "nvme_io": false, 00:18:02.909 "nvme_io_md": false, 00:18:02.909 "nvme_iov_md": false, 00:18:02.909 "read": true, 00:18:02.909 "reset": true, 00:18:02.909 "seek_data": true, 00:18:02.909 "seek_hole": true, 00:18:02.909 "unmap": true, 00:18:02.909 "write": true, 00:18:02.909 "write_zeroes": true, 00:18:02.909 "zcopy": false, 00:18:02.909 "zone_append": false, 00:18:02.909 "zone_management": false 00:18:02.909 }, 00:18:02.909 "uuid": "56204a85-7fe4-4cd5-9c89-713b60b6c442", 00:18:02.909 "zoned": false 00:18:02.909 } 00:18:02.909 ] 00:18:02.909 12:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:18:02.909 12:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7d8fec3a-213d-42c4-89c7-dca521b9c0f5 00:18:02.909 12:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:18:03.167 12:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:18:03.167 12:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7d8fec3a-213d-42c4-89c7-dca521b9c0f5 00:18:03.167 12:16:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:18:03.426 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:18:03.426 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:03.426 [2024-11-25 12:16:59.291455] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:03.684 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7d8fec3a-213d-42c4-89c7-dca521b9c0f5 00:18:03.684 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:18:03.684 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7d8fec3a-213d-42c4-89c7-dca521b9c0f5 00:18:03.684 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:03.684 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:03.684 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:03.684 12:16:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:03.684 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:03.684 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:03.684 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:03.684 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:03.684 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7d8fec3a-213d-42c4-89c7-dca521b9c0f5 00:18:03.684 2024/11/25 12:16:59 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:7d8fec3a-213d-42c4-89c7-dca521b9c0f5], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:18:03.684 request: 00:18:03.684 { 00:18:03.684 "method": "bdev_lvol_get_lvstores", 00:18:03.684 "params": { 00:18:03.684 "uuid": "7d8fec3a-213d-42c4-89c7-dca521b9c0f5" 00:18:03.684 } 00:18:03.684 } 00:18:03.684 Got JSON-RPC error response 00:18:03.684 GoRPCClient: error on JSON-RPC call 00:18:03.684 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:18:03.684 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:03.684 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:03.684 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:03.684 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:03.944 aio_bdev 00:18:03.944 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 56204a85-7fe4-4cd5-9c89-713b60b6c442 00:18:03.944 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=56204a85-7fe4-4cd5-9c89-713b60b6c442 00:18:03.944 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:03.944 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:18:03.944 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:03.944 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:03.944 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:04.202 12:16:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 56204a85-7fe4-4cd5-9c89-713b60b6c442 -t 2000 00:18:04.461 [ 
00:18:04.461 { 00:18:04.461 "aliases": [ 00:18:04.461 "lvs/lvol" 00:18:04.461 ], 00:18:04.461 "assigned_rate_limits": { 00:18:04.461 "r_mbytes_per_sec": 0, 00:18:04.461 "rw_ios_per_sec": 0, 00:18:04.461 "rw_mbytes_per_sec": 0, 00:18:04.461 "w_mbytes_per_sec": 0 00:18:04.461 }, 00:18:04.461 "block_size": 4096, 00:18:04.461 "claimed": false, 00:18:04.461 "driver_specific": { 00:18:04.461 "lvol": { 00:18:04.461 "base_bdev": "aio_bdev", 00:18:04.461 "clone": false, 00:18:04.461 "esnap_clone": false, 00:18:04.461 "lvol_store_uuid": "7d8fec3a-213d-42c4-89c7-dca521b9c0f5", 00:18:04.461 "num_allocated_clusters": 38, 00:18:04.461 "snapshot": false, 00:18:04.461 "thin_provision": false 00:18:04.461 } 00:18:04.461 }, 00:18:04.461 "name": "56204a85-7fe4-4cd5-9c89-713b60b6c442", 00:18:04.461 "num_blocks": 38912, 00:18:04.461 "product_name": "Logical Volume", 00:18:04.461 "supported_io_types": { 00:18:04.461 "abort": false, 00:18:04.461 "compare": false, 00:18:04.461 "compare_and_write": false, 00:18:04.461 "copy": false, 00:18:04.461 "flush": false, 00:18:04.461 "get_zone_info": false, 00:18:04.461 "nvme_admin": false, 00:18:04.461 "nvme_io": false, 00:18:04.461 "nvme_io_md": false, 00:18:04.461 "nvme_iov_md": false, 00:18:04.461 "read": true, 00:18:04.461 "reset": true, 00:18:04.461 "seek_data": true, 00:18:04.461 "seek_hole": true, 00:18:04.461 "unmap": true, 00:18:04.461 "write": true, 00:18:04.461 "write_zeroes": true, 00:18:04.461 "zcopy": false, 00:18:04.461 "zone_append": false, 00:18:04.461 "zone_management": false 00:18:04.461 }, 00:18:04.461 "uuid": "56204a85-7fe4-4cd5-9c89-713b60b6c442", 00:18:04.461 "zoned": false 00:18:04.461 } 00:18:04.461 ] 00:18:04.461 12:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:18:04.461 12:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7d8fec3a-213d-42c4-89c7-dca521b9c0f5 00:18:04.461 12:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:18:04.719 12:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:18:04.719 12:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7d8fec3a-213d-42c4-89c7-dca521b9c0f5 00:18:04.719 12:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:18:04.977 12:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:18:04.977 12:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 56204a85-7fe4-4cd5-9c89-713b60b6c442 00:18:04.977 12:17:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7d8fec3a-213d-42c4-89c7-dca521b9c0f5 00:18:05.236 12:17:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:05.494 12:17:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:18:05.751 ************************************ 00:18:05.751 END TEST lvs_grow_dirty 00:18:05.751 ************************************ 00:18:05.751 00:18:05.751 real 0m19.161s 00:18:05.751 user 0m41.583s 00:18:05.751 sys 0m5.144s 00:18:05.751 12:17:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:05.751 12:17:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:18:06.009 12:17:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:18:06.009 12:17:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:18:06.009 12:17:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:18:06.009 12:17:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:18:06.009 12:17:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:06.009 12:17:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:18:06.009 12:17:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:18:06.009 12:17:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:18:06.009 12:17:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:06.009 nvmf_trace.0 00:18:06.009 12:17:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:18:06.009 12:17:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:18:06.009 12:17:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:06.009 12:17:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:18:07.382 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:07.382 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:18:07.382 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:07.382 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:07.382 rmmod nvme_tcp 00:18:07.382 rmmod nvme_fabrics 00:18:07.382 rmmod nvme_keyring 00:18:07.382 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:07.382 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:18:07.382 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:18:07.382 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 66426 ']' 00:18:07.382 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 66426 00:18:07.382 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 66426 ']' 00:18:07.382 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 66426 00:18:07.382 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:18:07.382 12:17:03 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:07.382 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66426 00:18:07.382 killing process with pid 66426 00:18:07.382 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:07.382 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:07.382 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66426' 00:18:07.382 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 66426 00:18:07.382 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 66426 00:18:07.382 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:07.382 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:07.382 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:07.382 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:18:07.382 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:18:07.382 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:18:07.382 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:07.383 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:07.383 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:07.383 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:07.383 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:07.383 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:07.383 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:07.641 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:07.641 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:07.641 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:07.641 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:07.641 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:07.641 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:07.641 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:07.641 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:07.641 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:07.641 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:18:07.641 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:07.641 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:07.641 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:07.641 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:18:07.641 00:18:07.641 real 0m39.707s 00:18:07.641 user 1m4.834s 00:18:07.641 sys 0m8.608s 00:18:07.641 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:07.641 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:18:07.641 ************************************ 00:18:07.641 END TEST nvmf_lvs_grow 00:18:07.641 ************************************ 00:18:07.641 12:17:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:18:07.641 12:17:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:07.641 12:17:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:07.641 12:17:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:18:07.641 ************************************ 00:18:07.641 START TEST nvmf_bdev_io_wait 00:18:07.641 ************************************ 00:18:07.641 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:18:07.900 * Looking for test storage... 
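Each phase in this job is driven through run_test, which times the target script and tags every xtrace record with the nested test name visible in the prefixes above (nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait). A minimal sketch of reproducing just this phase by hand, assuming the same SPDK checkout layout as the trace; run_test itself is provided by test/common/autotest_common.sh:

    # Hedged sketch -- checkout path and helper sourcing mirror this job's
    # layout; run_test adds the timing and xtrace-prefix wrapping seen above.
    cd /home/vagrant/spdk_repo/spdk
    source test/common/autotest_common.sh
    run_test "nvmf_bdev_io_wait" test/nvmf/target/bdev_io_wait.sh --transport=tcp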
00:18:07.900 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:07.900 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:07.900 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:18:07.900 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:07.900 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:07.900 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:07.900 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:07.900 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:07.900 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:18:07.900 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:18:07.900 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:18:07.900 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:18:07.900 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:18:07.900 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:18:07.900 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:18:07.900 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:07.900 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:18:07.900 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:18:07.900 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:07.900 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:07.900 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:18:07.900 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:18:07.900 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:07.900 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:18:07.900 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:18:07.900 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:18:07.900 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:18:07.900 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:07.900 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:18:07.900 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:18:07.900 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:07.900 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:07.900 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:18:07.900 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:07.900 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:07.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.900 --rc genhtml_branch_coverage=1 00:18:07.900 --rc genhtml_function_coverage=1 00:18:07.900 --rc genhtml_legend=1 00:18:07.900 --rc geninfo_all_blocks=1 00:18:07.900 --rc geninfo_unexecuted_blocks=1 00:18:07.900 00:18:07.900 ' 00:18:07.900 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:07.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.901 --rc genhtml_branch_coverage=1 00:18:07.901 --rc genhtml_function_coverage=1 00:18:07.901 --rc genhtml_legend=1 00:18:07.901 --rc geninfo_all_blocks=1 00:18:07.901 --rc geninfo_unexecuted_blocks=1 00:18:07.901 00:18:07.901 ' 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:07.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.901 --rc genhtml_branch_coverage=1 00:18:07.901 --rc genhtml_function_coverage=1 00:18:07.901 --rc genhtml_legend=1 00:18:07.901 --rc geninfo_all_blocks=1 00:18:07.901 --rc geninfo_unexecuted_blocks=1 00:18:07.901 00:18:07.901 ' 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:07.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.901 --rc genhtml_branch_coverage=1 00:18:07.901 --rc genhtml_function_coverage=1 00:18:07.901 --rc genhtml_legend=1 00:18:07.901 --rc geninfo_all_blocks=1 00:18:07.901 --rc geninfo_unexecuted_blocks=1 00:18:07.901 00:18:07.901 ' 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:07.901 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
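The host identity set up in this preamble comes from nvme-cli: gen-hostnqn emits a UUID-based NQN, and the trace shows the UUID suffix reused as the host ID (5934b511-...). A short sketch of that derivation, using the same variable names as test/nvmf/common.sh; the parameter expansion shown here is one way to extract the suffix and may not match the helper's exact code:

    # Hedged sketch of the host-identity setup traced above.
    NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # keep only the UUID after the last ':'
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

The MALLOC_BDEV_SIZE=64 / MALLOC_BLOCK_SIZE=512 pair just above fixes the backing device this test creates later: a 64 MB malloc bdev with 512-byte blocks (see the bdev_malloc_create 64 512 -b Malloc0 call further down).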
00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:07.901 
12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:07.901 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:07.902 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:07.902 Cannot find device "nvmf_init_br" 00:18:07.902 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:18:07.902 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:07.902 Cannot find device "nvmf_init_br2" 00:18:07.902 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:18:07.902 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:07.902 Cannot find device "nvmf_tgt_br" 00:18:07.902 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:18:07.902 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:07.902 Cannot find device "nvmf_tgt_br2" 00:18:07.902 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:18:07.902 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:07.902 Cannot find device "nvmf_init_br" 00:18:07.902 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:18:07.902 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:07.902 Cannot find device "nvmf_init_br2" 00:18:07.902 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:18:07.902 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:07.902 Cannot find device "nvmf_tgt_br" 00:18:07.902 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:18:07.902 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:07.902 Cannot find device "nvmf_tgt_br2" 00:18:07.902 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:18:07.902 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:07.902 Cannot find device "nvmf_br" 00:18:07.902 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:18:07.902 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:07.902 Cannot find device "nvmf_init_if" 00:18:07.902 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:18:07.902 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:07.902 Cannot find device "nvmf_init_if2" 00:18:07.902 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:18:07.902 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:07.902 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:07.902 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:18:07.902 
12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:07.902 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:07.902 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:18:07.902 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:07.902 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:07.902 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:07.902 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:07.902 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:07.902 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:08.161 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:08.161 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:08.161 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:08.161 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:08.161 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:08.161 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:08.161 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:08.161 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:08.161 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:08.161 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:08.161 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:08.161 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:08.161 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:08.161 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:08.161 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:08.161 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:08.161 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:08.161 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:08.161 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:08.161 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:08.161 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:08.161 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:08.161 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:08.161 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:08.161 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:08.161 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:08.161 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:08.161 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:08.161 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:18:08.161 00:18:08.161 --- 10.0.0.3 ping statistics --- 00:18:08.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:08.162 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:18:08.162 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:08.162 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:08.162 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.026 ms 00:18:08.162 00:18:08.162 --- 10.0.0.4 ping statistics --- 00:18:08.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:08.162 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:18:08.162 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:08.162 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:08.162 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.014 ms 00:18:08.162 00:18:08.162 --- 10.0.0.1 ping statistics --- 00:18:08.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:08.162 rtt min/avg/max/mdev = 0.014/0.014/0.014/0.000 ms 00:18:08.162 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:08.162 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:08.162 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.029 ms 00:18:08.162 00:18:08.162 --- 10.0.0.2 ping statistics --- 00:18:08.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:08.162 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:18:08.162 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:08.162 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:18:08.162 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:08.162 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:08.162 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:08.162 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:08.162 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:08.162 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:08.162 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:08.162 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:08.162 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:08.162 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:08.162 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:08.162 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=66901 00:18:08.162 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 66901 00:18:08.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:08.162 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 66901 ']' 00:18:08.162 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:08.162 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:08.162 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:08.162 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:08.162 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:08.162 12:17:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:08.162 [2024-11-25 12:17:03.944420] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
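At this point the veth fixture is fully wired: the initiator side keeps 10.0.0.1 and 10.0.0.2, the nvmf_tgt_ns_spdk namespace holds 10.0.0.3 and 10.0.0.4, all four links hang off the nvmf_br bridge, and the four pings above confirm reachability in both directions. The target is then launched paused inside that namespace; --wait-for-rpc holds full framework initialization until the script has set bdev options over RPC (the bdev_set_options / framework_start_init calls that follow). A minimal sketch with the flags copied from the trace:

    # Hedged sketch of the target launch traced above; the path is relative
    # to the SPDK checkout used by this job.
    sudo ip netns exec nvmf_tgt_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &

Note that the ACCEPT rules installed just before the pings carry an SPDK_NVMF comment tag; that tag is what lets the teardown's iptables-save | grep -v SPDK_NVMF | iptables-restore pipeline strip them without disturbing unrelated firewall rules.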
00:18:08.162 [2024-11-25 12:17:03.944484] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:08.420 [2024-11-25 12:17:04.085197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:08.420 [2024-11-25 12:17:04.122053] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:08.420 [2024-11-25 12:17:04.122212] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:08.420 [2024-11-25 12:17:04.122273] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:08.420 [2024-11-25 12:17:04.122300] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:08.420 [2024-11-25 12:17:04.122335] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:08.420 [2024-11-25 12:17:04.123041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:08.420 [2024-11-25 12:17:04.123123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:08.420 [2024-11-25 12:17:04.123230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:08.420 [2024-11-25 12:17:04.123191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:08.986 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:08.987 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:18:08.987 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:08.987 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:08.987 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:08.987 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:08.987 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:18:08.987 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.987 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:08.987 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.987 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:18:08.987 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.987 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:09.247 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.247 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:09.247 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.247 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:18:09.247 [2024-11-25 12:17:04.904344] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:09.247 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.247 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:09.247 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.247 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:09.247 Malloc0 00:18:09.247 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.247 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:09.247 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.247 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:09.247 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.247 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:09.247 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.247 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:09.247 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.247 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:09.247 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.247 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:09.247 [2024-11-25 12:17:04.950800] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:09.247 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.247 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=66955 00:18:09.247 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=66957 00:18:09.247 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:18:09.247 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=66958 00:18:09.247 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:18:09.247 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:18:09.247 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:18:09.247 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 
0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:18:09.247 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:18:09.247 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=66961 00:18:09.247 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:18:09.247 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:09.247 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:18:09.247 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:09.247 { 00:18:09.247 "params": { 00:18:09.247 "name": "Nvme$subsystem", 00:18:09.247 "trtype": "$TEST_TRANSPORT", 00:18:09.247 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:09.247 "adrfam": "ipv4", 00:18:09.247 "trsvcid": "$NVMF_PORT", 00:18:09.247 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:09.247 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:09.247 "hdgst": ${hdgst:-false}, 00:18:09.247 "ddgst": ${ddgst:-false} 00:18:09.247 }, 00:18:09.247 "method": "bdev_nvme_attach_controller" 00:18:09.247 } 00:18:09.247 EOF 00:18:09.247 )") 00:18:09.247 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:18:09.247 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:18:09.247 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:09.247 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:09.247 { 00:18:09.247 "params": { 00:18:09.247 "name": "Nvme$subsystem", 00:18:09.248 "trtype": "$TEST_TRANSPORT", 00:18:09.248 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:09.248 "adrfam": "ipv4", 00:18:09.248 "trsvcid": "$NVMF_PORT", 00:18:09.248 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:09.248 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:09.248 "hdgst": ${hdgst:-false}, 00:18:09.248 "ddgst": ${ddgst:-false} 00:18:09.248 }, 00:18:09.248 "method": "bdev_nvme_attach_controller" 00:18:09.248 } 00:18:09.248 EOF 00:18:09.248 )") 00:18:09.248 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:18:09.248 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:18:09.248 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:18:09.248 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:18:09.248 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:18:09.248 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:09.248 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:09.248 { 00:18:09.248 "params": { 00:18:09.248 "name": "Nvme$subsystem", 00:18:09.248 "trtype": "$TEST_TRANSPORT", 00:18:09.248 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:09.248 "adrfam": "ipv4", 00:18:09.248 "trsvcid": 
"$NVMF_PORT", 00:18:09.248 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:09.248 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:09.248 "hdgst": ${hdgst:-false}, 00:18:09.248 "ddgst": ${ddgst:-false} 00:18:09.248 }, 00:18:09.248 "method": "bdev_nvme_attach_controller" 00:18:09.248 } 00:18:09.248 EOF 00:18:09.248 )") 00:18:09.248 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:18:09.248 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:18:09.248 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:18:09.248 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:18:09.248 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:18:09.248 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:18:09.248 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:18:09.248 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:09.248 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:09.248 { 00:18:09.248 "params": { 00:18:09.248 "name": "Nvme$subsystem", 00:18:09.248 "trtype": "$TEST_TRANSPORT", 00:18:09.248 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:09.248 "adrfam": "ipv4", 00:18:09.248 "trsvcid": "$NVMF_PORT", 00:18:09.248 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:09.248 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:09.248 "hdgst": ${hdgst:-false}, 00:18:09.248 "ddgst": ${ddgst:-false} 00:18:09.248 }, 00:18:09.248 "method": "bdev_nvme_attach_controller" 00:18:09.248 } 00:18:09.248 EOF 00:18:09.248 )") 00:18:09.248 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:18:09.248 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:09.248 "params": { 00:18:09.248 "name": "Nvme1", 00:18:09.248 "trtype": "tcp", 00:18:09.248 "traddr": "10.0.0.3", 00:18:09.248 "adrfam": "ipv4", 00:18:09.248 "trsvcid": "4420", 00:18:09.248 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:09.248 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:09.248 "hdgst": false, 00:18:09.248 "ddgst": false 00:18:09.248 }, 00:18:09.248 "method": "bdev_nvme_attach_controller" 00:18:09.248 }' 00:18:09.248 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:18:09.248 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:09.248 "params": { 00:18:09.248 "name": "Nvme1", 00:18:09.248 "trtype": "tcp", 00:18:09.248 "traddr": "10.0.0.3", 00:18:09.248 "adrfam": "ipv4", 00:18:09.248 "trsvcid": "4420", 00:18:09.248 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:09.248 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:09.248 "hdgst": false, 00:18:09.248 "ddgst": false 00:18:09.248 }, 00:18:09.248 "method": "bdev_nvme_attach_controller" 00:18:09.248 }' 00:18:09.248 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:18:09.248 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:18:09.248 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:18:09.248 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:09.248 "params": { 00:18:09.248 "name": "Nvme1", 00:18:09.248 "trtype": "tcp", 00:18:09.248 "traddr": "10.0.0.3", 00:18:09.248 "adrfam": "ipv4", 00:18:09.248 "trsvcid": "4420", 00:18:09.248 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:09.248 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:09.248 "hdgst": false, 00:18:09.248 "ddgst": false 00:18:09.248 }, 00:18:09.248 "method": "bdev_nvme_attach_controller" 00:18:09.248 }' 00:18:09.248 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:18:09.248 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:18:09.248 12:17:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:09.248 "params": { 00:18:09.248 "name": "Nvme1", 00:18:09.248 "trtype": "tcp", 00:18:09.248 "traddr": "10.0.0.3", 00:18:09.248 "adrfam": "ipv4", 00:18:09.248 "trsvcid": "4420", 00:18:09.248 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:09.248 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:09.248 "hdgst": false, 00:18:09.248 "ddgst": false 00:18:09.248 }, 00:18:09.248 "method": "bdev_nvme_attach_controller" 00:18:09.248 }' 00:18:09.248 [2024-11-25 12:17:04.988688] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:18:09.248 [2024-11-25 12:17:04.989038] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:18:09.248 [2024-11-25 12:17:04.996080] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:18:09.248 [2024-11-25 12:17:04.996431] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:18:09.248 [2024-11-25 12:17:05.004101] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:18:09.248 [2024-11-25 12:17:05.004149] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:18:09.248 12:17:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 66955 00:18:09.248 [2024-11-25 12:17:05.013010] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
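The fan-out traced above runs four bdevperf instances concurrently against the same subsystem, one per workload (write, read, flush, unmap) on core masks 0x10 through 0x80. Each instance reads its configuration from gen_nvmf_target_json, whose heredoc template renders one bdev_nvme_attach_controller stanza per subsystem (hdgst/ddgst defaulting to false) pointing Nvme1 at 10.0.0.3:4420 / cnode1, delivered through process substitution, which is why the trace shows --json /dev/fd/63. A hedged sketch of two of the four launches:

    # Hedged sketch -- flags mirror the trace; gen_nvmf_target_json comes
    # from test/nvmf/common.sh and prints the JSON shown in the log above.
    ./build/examples/bdevperf -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
        --json <(gen_nvmf_target_json) & WRITE_PID=$!
    ./build/examples/bdevperf -m 0x20 -i 2 -q 128 -o 4096 -w read -t 1 -s 256 \
        --json <(gen_nvmf_target_json) & READ_PID=$!
    wait "$WRITE_PID" "$READ_PID"

The wait calls (66955, 66957, 66958, 66961 in the trace) reap each instance in turn; the per-workload IOPS/latency tables that follow are the output of these four one-second runs on cores 4 through 7.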
00:18:09.248 [2024-11-25 12:17:05.013064] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:18:09.508 [2024-11-25 12:17:05.160248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.508 [2024-11-25 12:17:05.188541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:18:09.508 [2024-11-25 12:17:05.207260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.508 [2024-11-25 12:17:05.236164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:18:09.508 [2024-11-25 12:17:05.250265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.508 [2024-11-25 12:17:05.280218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:18:09.508 [2024-11-25 12:17:05.293970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.508 Running I/O for 1 seconds... 00:18:09.508 [2024-11-25 12:17:05.329964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:18:09.508 Running I/O for 1 seconds... 00:18:09.767 Running I/O for 1 seconds... 00:18:09.767 Running I/O for 1 seconds... 00:18:10.700 193072.00 IOPS, 754.19 MiB/s 00:18:10.700 Latency(us) 00:18:10.700 [2024-11-25T12:17:06.576Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.700 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:18:10.700 Nvme1n1 : 1.00 192710.96 752.78 0.00 0.00 660.26 289.87 1840.05 00:18:10.700 [2024-11-25T12:17:06.576Z] =================================================================================================================== 00:18:10.700 [2024-11-25T12:17:06.576Z] Total : 192710.96 752.78 0.00 0.00 660.26 289.87 1840.05 00:18:10.700 12793.00 IOPS, 49.97 MiB/s 00:18:10.700 Latency(us) 00:18:10.700 [2024-11-25T12:17:06.576Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.700 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:18:10.700 Nvme1n1 : 1.01 12839.61 50.15 0.00 0.00 9936.54 4965.61 18854.20 00:18:10.700 [2024-11-25T12:17:06.576Z] =================================================================================================================== 00:18:10.700 [2024-11-25T12:17:06.576Z] Total : 12839.61 50.15 0.00 0.00 9936.54 4965.61 18854.20 00:18:10.700 11673.00 IOPS, 45.60 MiB/s 00:18:10.700 Latency(us) 00:18:10.700 [2024-11-25T12:17:06.576Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.700 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:18:10.700 Nvme1n1 : 1.01 11742.30 45.87 0.00 0.00 10865.86 4360.66 21374.82 00:18:10.700 [2024-11-25T12:17:06.576Z] =================================================================================================================== 00:18:10.700 [2024-11-25T12:17:06.576Z] Total : 11742.30 45.87 0.00 0.00 10865.86 4360.66 21374.82 00:18:10.700 11956.00 IOPS, 46.70 MiB/s 00:18:10.701 Latency(us) 00:18:10.701 [2024-11-25T12:17:06.577Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:10.701 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:18:10.701 Nvme1n1 : 1.01 12041.13 47.04 0.00 0.00 10598.19 4133.81 22080.59 00:18:10.701 [2024-11-25T12:17:06.577Z] 
=================================================================================================================== 00:18:10.701 [2024-11-25T12:17:06.577Z] Total : 12041.13 47.04 0.00 0.00 10598.19 4133.81 22080.59 00:18:10.701 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 66957 00:18:10.701 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 66958 00:18:10.701 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 66961 00:18:10.959 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:10.959 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.959 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:10.959 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.959 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:18:10.959 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:18:10.959 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:10.959 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:18:10.959 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:10.959 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:18:10.959 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:10.959 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:10.959 rmmod nvme_tcp 00:18:10.959 rmmod nvme_fabrics 00:18:10.959 rmmod nvme_keyring 00:18:10.959 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:10.959 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:18:10.959 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:18:10.959 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 66901 ']' 00:18:10.960 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 66901 00:18:10.960 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 66901 ']' 00:18:10.960 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 66901 00:18:10.960 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:18:10.960 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:10.960 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66901 00:18:10.960 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:10.960 killing process with pid 66901 00:18:10.960 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:10.960 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 66901' 00:18:10.960 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 66901 00:18:10.960 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 66901 00:18:10.960 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:10.960 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:10.960 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:10.960 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:18:10.960 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:18:10.960 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:10.960 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:18:10.960 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:10.960 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:10.960 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:11.218 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:11.218 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:11.218 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:11.218 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:11.218 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:11.218 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:11.218 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:11.218 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:11.218 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:11.218 12:17:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:11.218 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:11.218 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:11.218 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:11.218 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:11.218 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:11.218 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:11.218 12:17:07 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:18:11.218 ************************************ 00:18:11.218 END TEST nvmf_bdev_io_wait 00:18:11.218 ************************************ 00:18:11.218 00:18:11.219 real 0m3.607s 00:18:11.219 user 0m15.271s 00:18:11.219 sys 0m1.492s 00:18:11.219 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:11.219 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:18:11.485 12:17:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:11.485 12:17:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:11.485 12:17:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:11.485 12:17:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:18:11.485 ************************************ 00:18:11.485 START TEST nvmf_queue_depth 00:18:11.485 ************************************ 00:18:11.485 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:11.485 * Looking for test storage... 00:18:11.485 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:11.485 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:11.485 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:18:11.485 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:11.485 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:11.485 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:11.485 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:11.485 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:11.485 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:18:11.485 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:18:11.485 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:18:11.485 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:18:11.485 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:18:11.485 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:18:11.485 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:18:11.485 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:11.485 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:18:11.485 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:18:11.485 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:11.485 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:11.485 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:18:11.485 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:18:11.485 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:11.485 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:18:11.485 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:18:11.485 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:18:11.485 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:18:11.485 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:11.485 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:18:11.485 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:18:11.485 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:11.485 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:11.485 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:18:11.485 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:11.485 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:11.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.485 --rc genhtml_branch_coverage=1 00:18:11.485 --rc genhtml_function_coverage=1 00:18:11.485 --rc genhtml_legend=1 00:18:11.485 --rc geninfo_all_blocks=1 00:18:11.485 --rc geninfo_unexecuted_blocks=1 00:18:11.485 00:18:11.485 ' 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:11.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.486 --rc genhtml_branch_coverage=1 00:18:11.486 --rc genhtml_function_coverage=1 00:18:11.486 --rc genhtml_legend=1 00:18:11.486 --rc geninfo_all_blocks=1 00:18:11.486 --rc geninfo_unexecuted_blocks=1 00:18:11.486 00:18:11.486 ' 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:11.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.486 --rc genhtml_branch_coverage=1 00:18:11.486 --rc genhtml_function_coverage=1 00:18:11.486 --rc genhtml_legend=1 00:18:11.486 --rc geninfo_all_blocks=1 00:18:11.486 --rc geninfo_unexecuted_blocks=1 00:18:11.486 00:18:11.486 ' 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:11.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:11.486 --rc genhtml_branch_coverage=1 00:18:11.486 --rc genhtml_function_coverage=1 00:18:11.486 --rc genhtml_legend=1 00:18:11.486 --rc geninfo_all_blocks=1 00:18:11.486 --rc geninfo_unexecuted_blocks=1 00:18:11.486 00:18:11.486 ' 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:11.486 12:17:07 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:11.486 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:18:11.486 
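The "[: : integer expression expected" complaint from nvmf/common.sh line 33 above is harmless: the trace shows build_nvmf_app_args evaluating '[' '' -eq 1 ']', and test(1)'s -eq operator requires integers on both sides, so a flag that is unset and expands to the empty string trips the check before execution falls through to the remaining branches. A minimal sketch of the failure mode and one common guard, assuming nothing about common.sh beyond what the trace shows:

  # Reproduce the warning: -eq needs integers, and an unset flag expands to ""
  flag=""
  [ "$flag" -eq 1 ] && echo "flag set"        # -> bash: [: : integer expression expected
  # Defaulting the expansion keeps the comparison numeric (a guard sketch,
  # not SPDK's actual fix):
  [ "${flag:-0}" -eq 1 ] && echo "flag set"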
12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:11.486 12:17:07 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:11.486 Cannot find device "nvmf_init_br" 00:18:11.486 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:18:11.487 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:11.487 Cannot find device "nvmf_init_br2" 00:18:11.487 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:18:11.487 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:11.487 Cannot find device "nvmf_tgt_br" 00:18:11.487 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:18:11.487 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:11.487 Cannot find device "nvmf_tgt_br2" 00:18:11.487 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:18:11.487 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:11.487 Cannot find device "nvmf_init_br" 00:18:11.487 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:18:11.487 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:11.747 Cannot find device "nvmf_init_br2" 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:11.747 Cannot find device "nvmf_tgt_br" 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:11.747 Cannot find device "nvmf_tgt_br2" 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:11.747 Cannot find device "nvmf_br" 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:11.747 Cannot find device "nvmf_init_if" 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:11.747 Cannot find device "nvmf_init_if2" 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:11.747 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:11.747 12:17:07 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:11.747 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:11.747 
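Everything nvmf_veth_init has built to this point is a self-contained test network: the target will run inside the nvmf_tgt_ns_spdk namespace and own 10.0.0.3/24 and 10.0.0.4/24, the initiator side stays in the root namespace on 10.0.0.1/24 and 10.0.0.2/24, and the *_br peer ends of each veth pair get enslaved to the nvmf_br bridge so both sides share one L2 segment (the remaining enslave commands, the port-4420 iptables ACCEPT rules, and the connectivity pings follow below). A condensed sketch of one half of the topology, assuming one initiator pair and one target pair where the trace creates two of each:

  # Root namespace <-> nvmf_tgt_ns_spdk, bridged through nvmf_br:
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair, stays on host
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end moves into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                     # bridge the peer ends so
  ip link set nvmf_tgt_br master nvmf_br                      # 10.0.0.1 can reach 10.0.0.3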
12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:11.747 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:11.747 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:18:11.747 00:18:11.747 --- 10.0.0.3 ping statistics --- 00:18:11.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.747 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:11.747 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:11.747 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.033 ms 00:18:11.747 00:18:11.747 --- 10.0.0.4 ping statistics --- 00:18:11.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.747 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:11.747 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:11.747 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:18:11.747 00:18:11.747 --- 10.0.0.1 ping statistics --- 00:18:11.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.747 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:11.747 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:11.747 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.044 ms 00:18:11.747 00:18:11.747 --- 10.0.0.2 ping statistics --- 00:18:11.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.747 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:11.747 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=67221 00:18:12.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:12.005 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 67221 00:18:12.005 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 67221 ']' 00:18:12.005 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:12.005 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:12.005 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:12.005 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:12.005 12:17:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:12.005 [2024-11-25 12:17:07.649159] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:18:12.005 [2024-11-25 12:17:07.649322] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:12.005 [2024-11-25 12:17:07.786730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.005 [2024-11-25 12:17:07.821440] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:12.005 [2024-11-25 12:17:07.821605] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:12.005 [2024-11-25 12:17:07.821664] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:12.005 [2024-11-25 12:17:07.822086] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:12.005 [2024-11-25 12:17:07.822208] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:12.005 [2024-11-25 12:17:07.822494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:12.939 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:12.939 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:18:12.939 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:12.939 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:12.939 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:12.939 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:12.939 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:12.939 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.939 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:12.939 [2024-11-25 12:17:08.585515] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:12.939 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.939 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:12.939 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.939 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:12.939 Malloc0 00:18:12.939 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.939 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:12.939 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.939 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:12.939 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
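The target is provisioned entirely over JSON-RPC: rpc_cmd in the trace is effectively the harness's wrapper around scripts/rpc.py talking to the nvmf_tgt app's /var/tmp/spdk.sock, and the subsystem just created gets its namespace and TCP listener in the next few commands below. Run by hand against a live target, the sequence would look roughly like this (a sketch with flags copied verbatim from the trace):

  # Same provisioning, issued directly with scripts/rpc.py:
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192      # TCP transport, options as in the trace
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0         # 64 MiB RAM-backed bdev, 512-byte blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  # -a allows any host NQN to connect; -s sets the serial number
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420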
00:18:12.939 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:12.939 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.939 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:12.939 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.939 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:12.939 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.939 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:12.939 [2024-11-25 12:17:08.624228] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:12.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:12.939 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.939 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=67271 00:18:12.939 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:12.939 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 67271 /var/tmp/bdevperf.sock 00:18:12.939 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 67271 ']' 00:18:12.939 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:12.939 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:18:12.939 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:12.939 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:12.939 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:12.939 12:17:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:12.939 [2024-11-25 12:17:08.668453] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:18:12.939 [2024-11-25 12:17:08.668741] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67271 ] 00:18:12.939 [2024-11-25 12:17:08.810869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.197 [2024-11-25 12:17:08.847663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:13.769 12:17:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:13.769 12:17:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:18:13.769 12:17:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:13.769 12:17:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.769 12:17:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:14.051 NVMe0n1 00:18:14.051 12:17:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.051 12:17:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:14.051 Running I/O for 10 seconds... 00:18:15.919 8318.00 IOPS, 32.49 MiB/s [2024-11-25T12:17:13.171Z] 9007.00 IOPS, 35.18 MiB/s [2024-11-25T12:17:14.106Z] 9756.33 IOPS, 38.11 MiB/s [2024-11-25T12:17:15.052Z] 10244.50 IOPS, 40.02 MiB/s [2024-11-25T12:17:15.985Z] 10568.00 IOPS, 41.28 MiB/s [2024-11-25T12:17:16.917Z] 10750.33 IOPS, 41.99 MiB/s [2024-11-25T12:17:17.851Z] 10922.00 IOPS, 42.66 MiB/s [2024-11-25T12:17:18.795Z] 11030.50 IOPS, 43.09 MiB/s [2024-11-25T12:17:20.164Z] 11143.56 IOPS, 43.53 MiB/s [2024-11-25T12:17:20.164Z] 11231.60 IOPS, 43.87 MiB/s 00:18:24.288 Latency(us) 00:18:24.288 [2024-11-25T12:17:20.164Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.288 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:18:24.288 Verification LBA range: start 0x0 length 0x4000 00:18:24.288 NVMe0n1 : 10.07 11254.20 43.96 0.00 0.00 90648.56 23088.84 105664.20 00:18:24.288 [2024-11-25T12:17:20.164Z] =================================================================================================================== 00:18:24.288 [2024-11-25T12:17:20.164Z] Total : 11254.20 43.96 0.00 0.00 90648.56 23088.84 105664.20 00:18:24.288 { 00:18:24.288 "results": [ 00:18:24.288 { 00:18:24.288 "job": "NVMe0n1", 00:18:24.288 "core_mask": "0x1", 00:18:24.288 "workload": "verify", 00:18:24.288 "status": "finished", 00:18:24.288 "verify_range": { 00:18:24.288 "start": 0, 00:18:24.288 "length": 16384 00:18:24.288 }, 00:18:24.288 "queue_depth": 1024, 00:18:24.288 "io_size": 4096, 00:18:24.288 "runtime": 10.070905, 00:18:24.288 "iops": 11254.202080150691, 00:18:24.288 "mibps": 43.96172687558864, 00:18:24.288 "io_failed": 0, 00:18:24.288 "io_timeout": 0, 00:18:24.288 "avg_latency_us": 90648.55748568635, 00:18:24.288 "min_latency_us": 23088.836923076924, 00:18:24.288 "max_latency_us": 105664.19692307692 00:18:24.288 } 00:18:24.288 ], 00:18:24.288 "core_count": 1 00:18:24.288 } 00:18:24.288 12:17:19 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 67271 00:18:24.288 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 67271 ']' 00:18:24.288 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 67271 00:18:24.288 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:18:24.288 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:24.288 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67271 00:18:24.288 killing process with pid 67271 00:18:24.288 Received shutdown signal, test time was about 10.000000 seconds 00:18:24.288 00:18:24.288 Latency(us) 00:18:24.288 [2024-11-25T12:17:20.164Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:24.288 [2024-11-25T12:17:20.164Z] =================================================================================================================== 00:18:24.288 [2024-11-25T12:17:20.164Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:24.288 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:24.288 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:24.288 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67271' 00:18:24.288 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 67271 00:18:24.288 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 67271 00:18:24.288 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:24.288 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:18:24.288 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:24.288 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:18:24.288 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:24.288 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:18:24.288 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:24.288 12:17:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:24.288 rmmod nvme_tcp 00:18:24.288 rmmod nvme_fabrics 00:18:24.288 rmmod nvme_keyring 00:18:24.288 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:24.288 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:18:24.288 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:18:24.288 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 67221 ']' 00:18:24.288 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 67221 00:18:24.288 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 67221 ']' 00:18:24.288 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- 
# kill -0 67221 00:18:24.288 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:18:24.288 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:24.288 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67221 00:18:24.288 killing process with pid 67221 00:18:24.288 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:24.288 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:24.288 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67221' 00:18:24.288 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 67221 00:18:24.288 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 67221 00:18:24.545 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:24.545 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:24.545 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:24.545 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:18:24.545 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:18:24.545 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:24.545 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:18:24.545 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:24.545 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:24.545 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:24.545 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:24.545 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:24.545 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:24.545 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:24.545 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:24.545 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:24.546 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:24.546 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:24.546 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:24.546 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:24.546 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link 
delete nvmf_tgt_if 00:18:24.546 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:24.546 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:24.546 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:24.546 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:24.546 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:24.546 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:18:24.546 00:18:24.546 real 0m13.264s 00:18:24.546 user 0m23.306s 00:18:24.546 sys 0m1.488s 00:18:24.546 ************************************ 00:18:24.546 END TEST nvmf_queue_depth 00:18:24.546 ************************************ 00:18:24.546 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:24.546 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:24.546 12:17:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:24.546 12:17:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:24.546 12:17:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:24.546 12:17:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:18:24.804 ************************************ 00:18:24.804 START TEST nvmf_target_multipath 00:18:24.804 ************************************ 00:18:24.804 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:24.804 * Looking for test storage... 
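Before the multipath trace gets going, the queue-depth result above is worth a sanity check against Little's law: with 1024 commands kept in flight and a measured 11254.20 IOPS, the expected mean latency is QD / IOPS = 1024 / 11254.20 ≈ 0.0910 s, which lines up with the reported 90648.56 us average (the small gap is consistent with the IOPS ramp from ~8.3K to ~11.2K over the 10 s run). A one-liner check:

  # Little's law (L = lambda * W) applied to the bdevperf run above:
  awk 'BEGIN { printf "expected mean latency: %.2f ms\n", 1024 / 11254.20 * 1000 }'
  # -> expected mean latency: 90.99 ms   (reported: 90.65 ms)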
00:18:24.804 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:24.804 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:24.804 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:18:24.804 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:24.804 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:24.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.805 --rc genhtml_branch_coverage=1 00:18:24.805 --rc genhtml_function_coverage=1 00:18:24.805 --rc genhtml_legend=1 00:18:24.805 --rc geninfo_all_blocks=1 00:18:24.805 --rc geninfo_unexecuted_blocks=1 00:18:24.805 00:18:24.805 ' 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:24.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.805 --rc genhtml_branch_coverage=1 00:18:24.805 --rc genhtml_function_coverage=1 00:18:24.805 --rc genhtml_legend=1 00:18:24.805 --rc geninfo_all_blocks=1 00:18:24.805 --rc geninfo_unexecuted_blocks=1 00:18:24.805 00:18:24.805 ' 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:24.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.805 --rc genhtml_branch_coverage=1 00:18:24.805 --rc genhtml_function_coverage=1 00:18:24.805 --rc genhtml_legend=1 00:18:24.805 --rc geninfo_all_blocks=1 00:18:24.805 --rc geninfo_unexecuted_blocks=1 00:18:24.805 00:18:24.805 ' 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:24.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:24.805 --rc genhtml_branch_coverage=1 00:18:24.805 --rc genhtml_function_coverage=1 00:18:24.805 --rc genhtml_legend=1 00:18:24.805 --rc geninfo_all_blocks=1 00:18:24.805 --rc geninfo_unexecuted_blocks=1 00:18:24.805 00:18:24.805 ' 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.805 
12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:24.805 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:24.805 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:24.806 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:18:24.806 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:24.806 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:24.806 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:24.806 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:24.806 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:24.806 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:24.806 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:24.806 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:24.806 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:24.806 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:24.806 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:24.806 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:24.806 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:24.806 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:24.806 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:24.806 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:24.806 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:24.806 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:24.806 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:24.806 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:24.806 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:24.806 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:24.806 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:24.806 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:24.806 12:17:20 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:24.806 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:24.806 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:24.806 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:24.806 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:24.806 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:24.806 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:24.806 Cannot find device "nvmf_init_br" 00:18:24.806 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:18:24.806 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:24.806 Cannot find device "nvmf_init_br2" 00:18:24.806 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:18:24.806 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:24.806 Cannot find device "nvmf_tgt_br" 00:18:24.806 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:18:24.806 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:24.806 Cannot find device "nvmf_tgt_br2" 00:18:24.806 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:18:24.806 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:24.806 Cannot find device "nvmf_init_br" 00:18:24.806 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:18:24.806 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:24.806 Cannot find device "nvmf_init_br2" 00:18:24.806 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:18:24.806 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:24.806 Cannot find device "nvmf_tgt_br" 00:18:24.806 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:18:24.806 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:24.806 Cannot find device "nvmf_tgt_br2" 00:18:24.806 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:18:24.806 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:24.806 Cannot find device "nvmf_br" 00:18:24.806 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:18:24.806 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:24.806 Cannot find device "nvmf_init_if" 00:18:24.806 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:18:24.806 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:25.065 Cannot find device "nvmf_init_if2" 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:25.065 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:25.065 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
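Taken together, the nvmf_veth_init entries above — plus the bridging, firewall, and ping entries that follow — build a self-contained two-namespace test network. A condensed sketch of that topology, assuming nothing beyond the names and addresses that appear in this log (link-up steps omitted):

# Two initiator-side and two target-side veth pairs; the *_br peers are
# later enslaved to a common bridge (see the entries that follow).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target-facing ends move into the namespace where nvmf_tgt will run.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Initiator addresses (10.0.0.1/.2) stay in the root namespace; target
# addresses (10.0.0.3/.4) live inside nvmf_tgt_ns_spdk.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# One bridge joins the four peer ends into a single L2 segment, and
# iptables admits NVMe/TCP (port 4420) on the initiator interfaces.
ip link add nvmf_br type bridge
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT

The four cross-namespace pings traced afterwards confirm this segment works in both directions before any NVMe traffic is attempted.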
00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:25.065 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:25.065 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:18:25.065 00:18:25.065 --- 10.0.0.3 ping statistics --- 00:18:25.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:25.065 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:25.065 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:25.065 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:18:25.065 00:18:25.065 --- 10.0.0.4 ping statistics --- 00:18:25.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:25.065 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:25.065 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:25.065 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:18:25.065 00:18:25.065 --- 10.0.0.1 ping statistics --- 00:18:25.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:25.065 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:25.065 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:25.065 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:18:25.065 00:18:25.065 --- 10.0.0.2 ping statistics --- 00:18:25.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:25.065 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:25.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
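nvmfappstart, traced next, starts nvmf_tgt inside the namespace and blocks on waitforlisten until the RPC socket answers. A minimal sketch of that pattern, with rpc_get_methods used as a liveness probe — the real waitforlisten helper in autotest_common.sh is more elaborate, but the shape is the same:

# Launch the target in the netns: -i 0 is the shm id, -e 0xFFFF the
# tracepoint group mask, -m 0xF a 4-core mask (matching the four
# "Reactor started" notices that follow).
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Poll the RPC socket until the app responds (simplified waitforlisten).
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
until "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    kill -0 "$nvmfpid" || exit 1    # bail out if the target died early
    sleep 0.1
done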
00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=67656 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 67656 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 67656 ']' 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:25.065 12:17:20 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:25.065 [2024-11-25 12:17:20.927145] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:18:25.065 [2024-11-25 12:17:20.927204] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:25.323 [2024-11-25 12:17:21.066040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:25.323 [2024-11-25 12:17:21.102817] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:25.323 [2024-11-25 12:17:21.102989] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:25.323 [2024-11-25 12:17:21.103053] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:25.323 [2024-11-25 12:17:21.103080] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:25.323 [2024-11-25 12:17:21.103095] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
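Once the reactors are up, the test provisions a single subsystem behind two listeners and connects to it once per listener, which is what gives the host two controllers (nvme0c0n1 and nvme0c1n1) over the same namespace. Condensed from the RPC and nvme-cli calls traced below; flag glosses in the comments are best-effort readings of the traced options:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

"$rpc_py" nvmf_create_transport -t tcp -o -u 8192   # TCP transport, options as traced
"$rpc_py" bdev_malloc_create 64 512 -b Malloc0      # 64 MiB RAM disk, 512 B blocks
"$rpc_py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDKISFASTANDAWESOME -r                   # allow any host, serial, ANA reporting
"$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$rpc_py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
"$rpc_py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420

# Host side: one connect per listener, with hostnqn/hostid taken from
# NVME_HOST; -g/-G enable TCP header and data digests.
nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G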
00:18:25.323 [2024-11-25 12:17:21.103916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:25.323 [2024-11-25 12:17:21.104095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:25.323 [2024-11-25 12:17:21.104149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:25.323 [2024-11-25 12:17:21.104153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.253 12:17:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:26.253 12:17:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:18:26.253 12:17:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:26.253 12:17:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:26.253 12:17:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:26.253 12:17:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:26.253 12:17:21 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:26.253 [2024-11-25 12:17:22.017363] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:26.253 12:17:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:26.509 Malloc0 00:18:26.509 12:17:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:18:26.766 12:17:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:27.023 12:17:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:27.023 [2024-11-25 12:17:22.882508] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:27.280 12:17:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:18:27.280 [2024-11-25 12:17:23.082709] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:18:27.280 12:17:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid=5934b511-67a7-4c73-84f0-52fdf36bb79d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:18:27.538 12:17:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid=5934b511-67a7-4c73-84f0-52fdf36bb79d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:18:27.796 12:17:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # 
waitforserial SPDKISFASTANDAWESOME 00:18:27.796 12:17:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:18:27.796 12:17:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:27.796 12:17:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:27.796 12:17:23 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:18:29.713 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:29.713 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:29.713 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:29.714 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:29.714 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:29.714 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:18:29.714 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:18:29.714 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:18:29.714 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:18:29.714 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:18:29.714 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:18:29.714 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:18:29.714 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:18:29.714 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:18:29.714 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:18:29.714 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:18:29.714 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:18:29.714 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:18:29.714 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:18:29.714 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:18:29.714 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:18:29.714 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 
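check_ana_state, whose body is being traced here statement by statement, simply polls the sysfs ANA state of one path until it matches the expected value or roughly 20 seconds elapse. A reconstruction from the traced statements (the actual helper lives in test/nvmf/target/multipath.sh):

# Poll /sys/block/<path>/ana_state until it reports the expected state.
check_ana_state() {
    local path=$1 ana_state=$2
    local timeout=20
    local ana_state_f=/sys/block/$path/ana_state

    # Re-check every second until the file exists and holds the expected
    # value; give up (failing the test) once the timeout is exhausted.
    while [[ ! -e $ana_state_f || $(< "$ana_state_f") != "$ana_state" ]]; do
        sleep 1s
        (( timeout-- == 0 )) && return 1
    done
}

This helper is the test's synchronization point: each pass starts a 6-second randrw fio run against /dev/nvme0n1, flips the two listeners between inaccessible, non_optimized, and optimized via nvmf_subsystem_listener_set_ana_state mid-run, and waits for each flip to be reflected in sysfs before moving on — which is why the disk stats at the end can show ~98% utilization with zero I/O errors despite the path changes.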
00:18:29.714 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:18:29.714 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:18:29.714 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:18:29.714 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:18:29.714 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:18:29.714 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:18:29.714 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:18:29.714 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:18:29.714 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:18:29.714 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:18:29.714 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=67788 00:18:29.714 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:18:29.714 12:17:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:18:29.714 [global] 00:18:29.714 thread=1 00:18:29.714 invalidate=1 00:18:29.714 rw=randrw 00:18:29.714 time_based=1 00:18:29.714 runtime=6 00:18:29.714 ioengine=libaio 00:18:29.714 direct=1 00:18:29.714 bs=4096 00:18:29.714 iodepth=128 00:18:29.714 norandommap=0 00:18:29.714 numjobs=1 00:18:29.714 00:18:29.714 verify_dump=1 00:18:29.714 verify_backlog=512 00:18:29.714 verify_state_save=0 00:18:29.714 do_verify=1 00:18:29.714 verify=crc32c-intel 00:18:29.714 [job0] 00:18:29.714 filename=/dev/nvme0n1 00:18:29.714 Could not set queue depth (nvme0n1) 00:18:29.971 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:29.971 fio-3.35 00:18:29.971 Starting 1 thread 00:18:30.905 12:17:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:18:31.164 12:17:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:18:31.164 12:17:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:18:31.164 12:17:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:18:31.164 12:17:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:18:31.164 12:17:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local 
ana_state_f=/sys/block/nvme0c0n1/ana_state 00:18:31.164 12:17:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:18:31.164 12:17:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:18:31.164 12:17:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:18:31.164 12:17:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:18:31.164 12:17:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:18:31.164 12:17:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:18:31.164 12:17:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:18:31.164 12:17:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:18:31.164 12:17:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:18:32.538 12:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:18:32.538 12:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:18:32.538 12:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:18:32.538 12:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:32.538 12:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:18:32.796 12:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:18:32.796 12:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:18:32.796 12:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:18:32.796 12:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:18:32.796 12:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:18:32.796 12:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:18:32.796 12:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:18:32.796 12:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:18:32.796 12:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:18:32.796 12:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:18:32.796 12:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:18:32.796 12:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:18:32.796 12:17:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:18:33.731 12:17:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:18:33.731 12:17:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:18:33.731 12:17:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:18:33.731 12:17:29 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 67788 00:18:36.260 00:18:36.260 job0: (groupid=0, jobs=1): err= 0: pid=67809: Mon Nov 25 12:17:31 2024 00:18:36.260 read: IOPS=14.4k, BW=56.2MiB/s (59.0MB/s)(338MiB/6004msec) 00:18:36.260 slat (usec): min=2, max=8040, avg=41.02, stdev=195.75 00:18:36.260 clat (usec): min=689, max=17431, avg=6071.32, stdev=1069.54 00:18:36.260 lat (usec): min=704, max=17438, avg=6112.34, stdev=1080.50 00:18:36.260 clat percentiles (usec): 00:18:36.260 | 1.00th=[ 3654], 5.00th=[ 4686], 10.00th=[ 5080], 20.00th=[ 5407], 00:18:36.260 | 30.00th=[ 5538], 40.00th=[ 5669], 50.00th=[ 5866], 60.00th=[ 6128], 00:18:36.260 | 70.00th=[ 6456], 80.00th=[ 6783], 90.00th=[ 7308], 95.00th=[ 7963], 00:18:36.260 | 99.00th=[ 9110], 99.50th=[10028], 99.90th=[13698], 99.95th=[16057], 00:18:36.260 | 99.99th=[16909] 00:18:36.260 bw ( KiB/s): min=14712, max=37776, per=50.76%, avg=29230.55, stdev=7671.72, samples=11 00:18:36.260 iops : min= 3678, max= 9444, avg=7307.64, stdev=1917.93, samples=11 00:18:36.260 write: IOPS=8634, BW=33.7MiB/s (35.4MB/s)(176MiB/5210msec); 0 zone resets 00:18:36.260 slat (usec): min=8, max=2978, avg=48.76, stdev=132.86 00:18:36.260 clat (usec): min=640, max=16787, avg=5265.69, stdev=975.33 00:18:36.260 lat (usec): min=660, max=16804, avg=5314.45, stdev=981.30 00:18:36.260 clat percentiles (usec): 00:18:36.260 | 1.00th=[ 2802], 5.00th=[ 3949], 10.00th=[ 4424], 20.00th=[ 4686], 00:18:36.260 | 30.00th=[ 4883], 40.00th=[ 5014], 50.00th=[ 5211], 60.00th=[ 5342], 00:18:36.260 | 70.00th=[ 5538], 80.00th=[ 5800], 90.00th=[ 6390], 95.00th=[ 6783], 00:18:36.260 | 99.00th=[ 7832], 99.50th=[ 9110], 99.90th=[13960], 99.95th=[15795], 00:18:36.260 | 99.99th=[16188] 00:18:36.260 bw ( KiB/s): min=15408, max=36952, per=84.79%, avg=29285.09, stdev=7371.65, samples=11 00:18:36.260 iops : min= 3852, max= 9238, avg=7321.27, stdev=1842.91, samples=11 00:18:36.260 lat (usec) : 750=0.01%, 
1000=0.01% 00:18:36.260 lat (msec) : 2=0.06%, 4=3.12%, 10=96.38%, 20=0.44% 00:18:36.260 cpu : usr=3.43%, sys=18.97%, ctx=8305, majf=0, minf=108 00:18:36.260 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:18:36.260 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:36.260 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:36.260 issued rwts: total=86429,44986,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:36.260 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:36.260 00:18:36.260 Run status group 0 (all jobs): 00:18:36.260 READ: bw=56.2MiB/s (59.0MB/s), 56.2MiB/s-56.2MiB/s (59.0MB/s-59.0MB/s), io=338MiB (354MB), run=6004-6004msec 00:18:36.260 WRITE: bw=33.7MiB/s (35.4MB/s), 33.7MiB/s-33.7MiB/s (35.4MB/s-35.4MB/s), io=176MiB (184MB), run=5210-5210msec 00:18:36.260 00:18:36.260 Disk stats (read/write): 00:18:36.260 nvme0n1: ios=85244/44029, merge=0/0, ticks=489949/218772, in_queue=708721, util=98.48% 00:18:36.260 12:17:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:18:36.260 12:17:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:18:36.518 12:17:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:18:36.518 12:17:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:18:36.518 12:17:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:18:36.518 12:17:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:18:36.518 12:17:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:18:36.518 12:17:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:18:36.518 12:17:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:18:36.518 12:17:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:18:36.518 12:17:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:18:36.518 12:17:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:18:36.518 12:17:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:18:36.518 12:17:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:18:36.518 12:17:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:18:37.451 12:17:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:18:37.451 12:17:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:18:37.451 12:17:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:18:37.451 12:17:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:18:37.451 12:17:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=67947 00:18:37.451 12:17:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:18:37.451 12:17:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:18:37.451 [global] 00:18:37.451 thread=1 00:18:37.451 invalidate=1 00:18:37.451 rw=randrw 00:18:37.451 time_based=1 00:18:37.451 runtime=6 00:18:37.451 ioengine=libaio 00:18:37.451 direct=1 00:18:37.451 bs=4096 00:18:37.451 iodepth=128 00:18:37.451 norandommap=0 00:18:37.451 numjobs=1 00:18:37.451 00:18:37.451 verify_dump=1 00:18:37.451 verify_backlog=512 00:18:37.451 verify_state_save=0 00:18:37.451 do_verify=1 00:18:37.451 verify=crc32c-intel 00:18:37.451 [job0] 00:18:37.451 filename=/dev/nvme0n1 00:18:37.451 Could not set queue depth (nvme0n1) 00:18:37.709 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:37.709 fio-3.35 00:18:37.709 Starting 1 thread 00:18:38.637 12:17:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:18:38.637 12:17:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:18:38.894 12:17:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:18:38.894 12:17:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:18:38.894 12:17:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:18:38.894 12:17:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:18:38.894 12:17:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:18:38.894 12:17:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:18:38.894 12:17:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:18:38.894 12:17:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:18:38.894 12:17:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:18:38.894 12:17:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:18:38.894 12:17:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:18:38.894 12:17:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:18:38.894 12:17:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:18:40.269 12:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:18:40.269 12:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:18:40.269 12:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:18:40.269 12:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:40.269 12:17:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:18:40.269 12:17:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:18:40.269 12:17:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:18:40.269 12:17:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:18:40.269 12:17:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:18:40.269 12:17:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:18:40.269 12:17:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:18:40.269 12:17:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:18:40.269 12:17:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:18:40.269 12:17:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:18:40.269 12:17:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:18:40.269 12:17:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:18:40.269 12:17:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:18:40.269 12:17:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:18:41.640 12:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:18:41.640 12:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:18:41.640 12:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:18:41.640 12:17:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 67947 00:18:44.164 00:18:44.164 job0: (groupid=0, jobs=1): err= 0: pid=67968: Mon Nov 25 12:17:39 2024 00:18:44.164 read: IOPS=17.2k, BW=67.3MiB/s (70.6MB/s)(404MiB/6005msec) 00:18:44.164 slat (nsec): min=1703, max=4273.9k, avg=28729.25, stdev=151779.07 00:18:44.164 clat (usec): min=198, max=10117, avg=5095.52, stdev=1275.66 00:18:44.164 lat (usec): min=206, max=10122, avg=5124.25, stdev=1290.98 00:18:44.164 clat percentiles (usec): 00:18:44.164 | 1.00th=[ 2180], 5.00th=[ 2966], 10.00th=[ 3326], 20.00th=[ 3752], 00:18:44.164 | 30.00th=[ 4359], 40.00th=[ 5145], 50.00th=[ 5407], 60.00th=[ 5538], 00:18:44.164 | 70.00th=[ 5735], 80.00th=[ 6063], 90.00th=[ 6456], 95.00th=[ 6915], 00:18:44.164 | 99.00th=[ 8291], 99.50th=[ 8586], 99.90th=[ 9110], 99.95th=[ 9372], 00:18:44.164 | 99.99th=[ 9634] 00:18:44.164 bw ( KiB/s): min=27304, max=55392, per=52.69%, avg=36329.73, stdev=8057.48, samples=11 00:18:44.164 iops : min= 6826, max=13848, avg=9082.36, stdev=2014.42, samples=11 00:18:44.164 write: IOPS=9962, BW=38.9MiB/s (40.8MB/s)(208MiB/5333msec); 0 zone resets 00:18:44.164 slat (usec): min=7, max=1806, avg=38.36, stdev=107.25 00:18:44.164 clat (usec): min=953, max=9501, avg=4420.33, stdev=1084.46 00:18:44.164 lat (usec): min=986, max=9521, avg=4458.69, stdev=1096.62 00:18:44.164 clat percentiles (usec): 00:18:44.164 | 1.00th=[ 2089], 5.00th=[ 2507], 10.00th=[ 2737], 20.00th=[ 3163], 00:18:44.164 | 30.00th=[ 4080], 40.00th=[ 4555], 50.00th=[ 4752], 60.00th=[ 4948], 00:18:44.164 | 70.00th=[ 5080], 80.00th=[ 5276], 90.00th=[ 5473], 95.00th=[ 5669], 00:18:44.164 | 99.00th=[ 6652], 99.50th=[ 7373], 99.90th=[ 8717], 99.95th=[ 8979], 00:18:44.164 | 99.99th=[ 9241] 00:18:44.164 bw ( KiB/s): min=27280, max=55784, per=91.04%, avg=36280.27, stdev=8082.02, samples=11 00:18:44.164 iops : min= 6820, max=13946, avg=9070.00, stdev=2020.56, samples=11 00:18:44.164 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:18:44.164 lat (msec) : 2=0.76%, 4=25.53%, 10=73.69%, 20=0.01% 00:18:44.164 cpu : usr=4.18%, sys=19.35%, ctx=9734, majf=0, minf=90 00:18:44.164 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:18:44.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:44.165 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:44.165 issued rwts: total=103509,53131,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:44.165 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:44.165 00:18:44.165 Run status group 0 (all jobs): 00:18:44.165 READ: bw=67.3MiB/s (70.6MB/s), 67.3MiB/s-67.3MiB/s (70.6MB/s-70.6MB/s), io=404MiB (424MB), run=6005-6005msec 00:18:44.165 WRITE: bw=38.9MiB/s (40.8MB/s), 38.9MiB/s-38.9MiB/s (40.8MB/s-40.8MB/s), io=208MiB (218MB), run=5333-5333msec 00:18:44.165 00:18:44.165 Disk stats (read/write): 00:18:44.165 nvme0n1: ios=102494/52144, merge=0/0, ticks=495057/217040, in_queue=712097, util=98.55% 00:18:44.165 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:44.165 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:18:44.165 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # 
waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:44.165 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:18:44.165 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:44.165 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:44.165 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:44.165 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:44.165 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:18:44.165 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:44.165 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:18:44.165 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:18:44.165 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:18:44.165 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:18:44.165 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:44.165 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:18:44.165 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:44.165 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:18:44.165 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:44.165 12:17:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:44.165 rmmod nvme_tcp 00:18:44.165 rmmod nvme_fabrics 00:18:44.165 rmmod nvme_keyring 00:18:44.423 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:44.423 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:18:44.423 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:18:44.423 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 67656 ']' 00:18:44.423 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 67656 00:18:44.423 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 67656 ']' 00:18:44.423 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 67656 00:18:44.423 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:18:44.423 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:44.423 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67656 00:18:44.423 killing process with pid 67656 00:18:44.423 
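From here the run unwinds in reverse order: host-side kernel modules come out, the target process is killed, the SPDK iptables rules are stripped by their comment tag, and the veth/bridge/namespace plumbing is deleted (all traced below). In outline — with the final netns removal assumed, since _remove_spdk_ns runs with tracing disabled:

modprobe -v -r nvme-tcp nvme-fabrics                  # unload host-side modules
kill "$nvmfpid" && wait "$nvmfpid"                    # stop nvmf_tgt (pid 67656 here)
iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop only the tagged test rules
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk                      # assumed: done inside _remove_spdk_ns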
12:17:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:44.423 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:44.423 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67656' 00:18:44.423 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 67656 00:18:44.423 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 67656 00:18:44.423 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:44.423 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:44.423 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:44.423 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:18:44.423 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:18:44.423 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:18:44.423 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:44.423 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:44.423 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:44.423 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:44.423 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:44.423 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:44.423 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:44.423 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:44.423 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:44.423 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:44.423 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:44.423 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:44.681 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:44.681 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:44.681 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:44.681 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:44.681 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:44.681 12:17:40 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.681 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:44.681 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:44.681 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:18:44.681 00:18:44.681 real 0m19.986s 00:18:44.681 user 1m18.253s 00:18:44.681 sys 0m4.864s 00:18:44.681 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:44.681 ************************************ 00:18:44.681 END TEST nvmf_target_multipath 00:18:44.681 ************************************ 00:18:44.681 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:44.681 12:17:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:44.681 12:17:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:44.681 12:17:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:44.681 12:17:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:18:44.681 ************************************ 00:18:44.681 START TEST nvmf_zcopy 00:18:44.681 ************************************ 00:18:44.681 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:44.681 * Looking for test storage... 
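The START TEST / END TEST banners above come from the harness's run_test wrapper, which runs a test script with its arguments and propagates the exit code. An approximation of that wrapper (not the actual autotest_common.sh source; the real one also records per-test timing for the report):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        local rc=0 start=$SECONDS
        "$@" || rc=$?     # e.g. .../test/nvmf/target/zcopy.sh --transport=tcp
        echo "************************************"
        echo "END TEST $name (rc=$rc, $((SECONDS - start))s)"
        echo "************************************"
        return "$rc"
    }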
00:18:44.681 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:44.681 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:44.681 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:18:44.681 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:44.940 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:44.940 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:44.940 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:44.940 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:44.940 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:18:44.940 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:18:44.940 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:18:44.940 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:18:44.940 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:18:44.940 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:18:44.940 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:18:44.940 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:44.940 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:18:44.940 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:18:44.940 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:44.940 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:44.940 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:18:44.940 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:18:44.940 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:44.940 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:18:44.940 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:18:44.940 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:18:44.940 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:18:44.940 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:44.940 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:18:44.940 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:18:44.940 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:44.940 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:44.940 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:18:44.940 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:44.940 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:44.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.940 --rc genhtml_branch_coverage=1 00:18:44.940 --rc genhtml_function_coverage=1 00:18:44.940 --rc genhtml_legend=1 00:18:44.940 --rc geninfo_all_blocks=1 00:18:44.940 --rc geninfo_unexecuted_blocks=1 00:18:44.940 00:18:44.940 ' 00:18:44.940 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:44.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.940 --rc genhtml_branch_coverage=1 00:18:44.940 --rc genhtml_function_coverage=1 00:18:44.940 --rc genhtml_legend=1 00:18:44.940 --rc geninfo_all_blocks=1 00:18:44.940 --rc geninfo_unexecuted_blocks=1 00:18:44.940 00:18:44.940 ' 00:18:44.940 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:44.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.940 --rc genhtml_branch_coverage=1 00:18:44.940 --rc genhtml_function_coverage=1 00:18:44.940 --rc genhtml_legend=1 00:18:44.940 --rc geninfo_all_blocks=1 00:18:44.940 --rc geninfo_unexecuted_blocks=1 00:18:44.940 00:18:44.940 ' 00:18:44.940 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:44.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.940 --rc genhtml_branch_coverage=1 00:18:44.940 --rc genhtml_function_coverage=1 00:18:44.940 --rc genhtml_legend=1 00:18:44.940 --rc geninfo_all_blocks=1 00:18:44.940 --rc geninfo_unexecuted_blocks=1 00:18:44.940 00:18:44.940 ' 00:18:44.940 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:44.940 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:18:44.940 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
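The cmp_versions trace above (wrapped across these lines) decides whether the installed lcov predates 2.x by splitting both version strings on ".", "-" and ":" and comparing field by field. A condensed, illustrative reimplementation of that logic, with version_lt standing in for the harness's lt helper in scripts/common.sh:

    version_lt() {
        local IFS='.-:' v a b
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        # Compare numerically from the most significant field; a missing
        # field counts as 0. Purely numeric fields are assumed throughout.
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}
            ((a < b)) && return 0
            ((a > b)) && return 1
        done
        return 1   # equal is not "less than"
    }
    version_lt 1.15 2 && echo "lcov 1.15 predates 2.x option names"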
00:18:44.940 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:44.940 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:44.940 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:44.941 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
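prepare_net_devs lands in nvmf_veth_init, and the ip(8) calls traced below assemble a veth-and-bridge topology: initiator interfaces stay in the root namespace (10.0.0.1 and 10.0.0.2), target interfaces move into nvmf_tgt_ns_spdk (10.0.0.3 and 10.0.0.4), and the peer ends meet on the nvmf_br bridge. Condensed to one pair per side (the second pair, nvmf_init_if2/nvmf_tgt_if2, is wired identically):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays in root ns
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end will move
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br up && ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # let the bridge forward
    ping -c 1 10.0.0.3   # root namespace can now reach the target address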
00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:44.941 Cannot find device "nvmf_init_br" 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:18:44.941 12:17:40 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:44.941 Cannot find device "nvmf_init_br2" 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:18:44.941 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:44.941 Cannot find device "nvmf_tgt_br" 00:18:44.942 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:18:44.942 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:44.942 Cannot find device "nvmf_tgt_br2" 00:18:44.942 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:18:44.942 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:44.942 Cannot find device "nvmf_init_br" 00:18:44.942 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:18:44.942 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:44.942 Cannot find device "nvmf_init_br2" 00:18:44.942 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:18:44.942 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:44.942 Cannot find device "nvmf_tgt_br" 00:18:44.942 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:18:44.942 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:44.942 Cannot find device "nvmf_tgt_br2" 00:18:44.942 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:18:44.942 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:44.942 Cannot find device "nvmf_br" 00:18:44.942 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:18:44.942 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:44.942 Cannot find device "nvmf_init_if" 00:18:44.942 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:18:44.942 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:44.942 Cannot find device "nvmf_init_if2" 00:18:44.942 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:18:44.942 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:44.942 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:44.942 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:18:44.942 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:44.942 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:44.942 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:18:44.942 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:44.942 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:44.942 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:18:44.942 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:44.942 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:44.942 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:44.942 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:44.942 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:44.942 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:44.942 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:44.942 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:44.942 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:44.942 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:44.942 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:44.942 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:45.200 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:45.200 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:45.200 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:45.200 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:45.200 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:45.200 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:45.200 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:45.200 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:45.200 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:45.200 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:45.200 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:45.200 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:45.200 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:45.200 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:45.200 12:17:40 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:45.200 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:45.200 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:45.200 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:45.200 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:45.200 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:18:45.200 00:18:45.200 --- 10.0.0.3 ping statistics --- 00:18:45.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.200 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:18:45.200 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:45.200 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:45.200 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:18:45.200 00:18:45.200 --- 10.0.0.4 ping statistics --- 00:18:45.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.200 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:18:45.200 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:45.200 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:45.200 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.016 ms 00:18:45.200 00:18:45.200 --- 10.0.0.1 ping statistics --- 00:18:45.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.200 rtt min/avg/max/mdev = 0.016/0.016/0.016/0.000 ms 00:18:45.200 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:45.200 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:45.200 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:18:45.200 00:18:45.200 --- 10.0.0.2 ping statistics --- 00:18:45.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.200 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:18:45.200 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:45.200 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:18:45.200 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:45.200 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:45.200 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:45.200 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:45.200 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:45.200 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:45.200 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:45.201 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:18:45.201 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:45.201 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:45.201 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:45.201 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=68297 00:18:45.201 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 68297 00:18:45.201 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 68297 ']' 00:18:45.201 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:45.201 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:45.201 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:45.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:45.201 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:45.201 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:45.201 12:17:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:45.201 [2024-11-25 12:17:40.952252] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
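nvmfappstart above launches the target inside the namespace and waitforlisten polls until the RPC socket answers. Reduced to its essentials (the real waitforlisten issues RPCs with a bounded retry count; a bare socket check is a simplification):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # /var/tmp/spdk.sock is on the shared filesystem, so the root namespace
    # can wait for it even though the process runs inside the netns.
    until [[ -S /var/tmp/spdk.sock ]]; do sleep 0.1; done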
00:18:45.201 [2024-11-25 12:17:40.952309] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:45.458 [2024-11-25 12:17:41.088448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.459 [2024-11-25 12:17:41.122926] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:45.459 [2024-11-25 12:17:41.123074] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:45.459 [2024-11-25 12:17:41.123152] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:45.459 [2024-11-25 12:17:41.123179] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:45.459 [2024-11-25 12:17:41.123194] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:45.459 [2024-11-25 12:17:41.123496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:46.025 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:46.026 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:18:46.026 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:46.026 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:46.026 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:46.026 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:46.026 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:18:46.026 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:18:46.026 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.026 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:46.026 [2024-11-25 12:17:41.864453] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:46.026 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.026 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:46.026 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.026 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:46.026 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.026 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:46.026 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.026 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:46.026 [2024-11-25 12:17:41.880518] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.3 port 4420 *** 00:18:46.026 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.026 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:18:46.026 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.026 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:46.026 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.026 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:18:46.026 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.026 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:46.284 malloc0 00:18:46.284 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.284 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:46.284 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.284 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:46.284 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.284 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:18:46.284 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:18:46.284 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:18:46.284 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:18:46.284 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:46.284 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:46.284 { 00:18:46.284 "params": { 00:18:46.284 "name": "Nvme$subsystem", 00:18:46.284 "trtype": "$TEST_TRANSPORT", 00:18:46.284 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:46.284 "adrfam": "ipv4", 00:18:46.284 "trsvcid": "$NVMF_PORT", 00:18:46.284 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:46.284 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:46.284 "hdgst": ${hdgst:-false}, 00:18:46.284 "ddgst": ${ddgst:-false} 00:18:46.284 }, 00:18:46.284 "method": "bdev_nvme_attach_controller" 00:18:46.284 } 00:18:46.284 EOF 00:18:46.284 )") 00:18:46.284 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:18:46.284 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
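The rpc_cmd calls above, spelled out as the standalone rpc.py invocations they wrap, plus the bdevperf run that zcopy.sh@33 starts; the JSON handed over on fd 62 is the gen_nvmf_target_json output rendered just below ($bdevperf_json is an illustrative variable holding that rendered config):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy   # zero-copy TCP transport; -c 0 presumably zeroes in-capsule data
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    $RPC bdev_malloc_create 32 4096 -b malloc0          # 32 MiB backing bdev, 4 KiB blocks
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 62<<< "$bdevperf_json"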
00:18:46.284 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:18:46.284 12:17:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:46.284 "params": { 00:18:46.284 "name": "Nvme1", 00:18:46.284 "trtype": "tcp", 00:18:46.284 "traddr": "10.0.0.3", 00:18:46.284 "adrfam": "ipv4", 00:18:46.284 "trsvcid": "4420", 00:18:46.284 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:46.284 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:46.284 "hdgst": false, 00:18:46.284 "ddgst": false 00:18:46.284 }, 00:18:46.284 "method": "bdev_nvme_attach_controller" 00:18:46.284 }' 00:18:46.284 [2024-11-25 12:17:41.950892] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:18:46.284 [2024-11-25 12:17:41.950943] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68348 ] 00:18:46.284 [2024-11-25 12:17:42.089461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.284 [2024-11-25 12:17:42.125417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.542 Running I/O for 10 seconds... 00:18:48.407 6985.00 IOPS, 54.57 MiB/s [2024-11-25T12:17:45.657Z] 7010.00 IOPS, 54.77 MiB/s [2024-11-25T12:17:46.589Z] 7088.67 IOPS, 55.38 MiB/s [2024-11-25T12:17:47.522Z] 7492.75 IOPS, 58.54 MiB/s [2024-11-25T12:17:48.456Z] 7734.20 IOPS, 60.42 MiB/s [2024-11-25T12:17:49.388Z] 7892.33 IOPS, 61.66 MiB/s [2024-11-25T12:17:50.320Z] 8023.00 IOPS, 62.68 MiB/s [2024-11-25T12:17:51.693Z] 8145.75 IOPS, 63.64 MiB/s [2024-11-25T12:17:52.308Z] 8254.67 IOPS, 64.49 MiB/s [2024-11-25T12:17:52.308Z] 8337.60 IOPS, 65.14 MiB/s 00:18:56.432 Latency(us) 00:18:56.432 [2024-11-25T12:17:52.308Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:56.432 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:18:56.432 Verification LBA range: start 0x0 length 0x1000 00:18:56.432 Nvme1n1 : 10.01 8342.50 65.18 0.00 0.00 15299.31 324.53 27021.00 00:18:56.432 [2024-11-25T12:17:52.308Z] =================================================================================================================== 00:18:56.432 [2024-11-25T12:17:52.308Z] Total : 8342.50 65.18 0.00 0.00 15299.31 324.53 27021.00 00:18:56.690 12:17:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=68466 00:18:56.690 12:17:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:18:56.690 12:17:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:56.690 12:17:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:18:56.690 12:17:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:18:56.690 12:17:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:18:56.690 12:17:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:18:56.690 12:17:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:56.690 12:17:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:56.690 { 00:18:56.690 "params": { 00:18:56.690 "name": "Nvme$subsystem", 
00:18:56.690 "trtype": "$TEST_TRANSPORT", 00:18:56.690 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:56.690 "adrfam": "ipv4", 00:18:56.690 "trsvcid": "$NVMF_PORT", 00:18:56.690 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:56.690 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:56.690 "hdgst": ${hdgst:-false}, 00:18:56.690 "ddgst": ${ddgst:-false} 00:18:56.690 }, 00:18:56.690 "method": "bdev_nvme_attach_controller" 00:18:56.690 } 00:18:56.690 EOF 00:18:56.690 )") 00:18:56.690 12:17:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:18:56.691 12:17:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:18:56.691 [2024-11-25 12:17:52.387865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.691 [2024-11-25 12:17:52.387892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.691 12:17:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:18:56.691 12:17:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:56.691 "params": { 00:18:56.691 "name": "Nvme1", 00:18:56.691 "trtype": "tcp", 00:18:56.691 "traddr": "10.0.0.3", 00:18:56.691 "adrfam": "ipv4", 00:18:56.691 "trsvcid": "4420", 00:18:56.691 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:56.691 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:56.691 "hdgst": false, 00:18:56.691 "ddgst": false 00:18:56.691 }, 00:18:56.691 "method": "bdev_nvme_attach_controller" 00:18:56.691 }' 00:18:56.691 2024/11/25 12:17:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:56.691 [2024-11-25 12:17:52.395843] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.691 [2024-11-25 12:17:52.395855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.691 2024/11/25 12:17:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:56.691 [2024-11-25 12:17:52.403842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.691 [2024-11-25 12:17:52.403853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.691 2024/11/25 12:17:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:56.691 [2024-11-25 12:17:52.411843] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.691 [2024-11-25 12:17:52.411854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.691 2024/11/25 12:17:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:56.691 [2024-11-25 12:17:52.419844] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:18:56.691 [2024-11-25 12:17:52.419854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.691 2024/11/25 12:17:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:56.691 [2024-11-25 12:17:52.425077] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:18:56.691 [2024-11-25 12:17:52.425175] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68466 ] 00:18:56.691 [2024-11-25 12:17:52.427845] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.691 [2024-11-25 12:17:52.427856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.691 2024/11/25 12:17:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:56.691 [2024-11-25 12:17:52.435852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.691 [2024-11-25 12:17:52.435867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.691 2024/11/25 12:17:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:56.691 [2024-11-25 12:17:52.443852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.691 [2024-11-25 12:17:52.443864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.691 2024/11/25 12:17:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:56.691 [2024-11-25 12:17:52.451855] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.691 [2024-11-25 12:17:52.451866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.691 2024/11/25 12:17:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:56.691 [2024-11-25 12:17:52.459856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:56.691 [2024-11-25 12:17:52.459867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.691 2024/11/25 12:17:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: 
Code=-32602 Msg=Invalid parameters
00:18:56.691 [2024-11-25 12:17:52.467860] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:56.691 [2024-11-25 12:17:52.467872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:56.691 2024/11/25 12:17:52 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[... the ERROR/ERROR/JSON-RPC triplet above repeats 12 more times, 8 ms apart, 12:17:52.475 through 12:17:52.563 ...]
00:18:56.950 [2024-11-25 12:17:52.567295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[... triplet repeats 4 more times, 12:17:52.571 through 12:17:52.595 ...]
00:18:56.950 [2024-11-25 12:17:52.597719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
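The pattern here is the test deliberately re-adding a namespace that is already attached: subsystem.c rejects the duplicate NSID, nvmf_rpc.c then logs that the namespace cannot be added, and the Go JSON-RPC client driving the test surfaces it as Code=-32602 (invalid parameters). The failing call is easy to replay by hand; below is a minimal sketch in Python over the raw Unix-socket RPC rather than the Go client the test uses, assuming the target listens on SPDK's default socket path (the method name, NQN, bdev name, and NSID are taken verbatim from the log):

    import json
    import socket

    # Assumption: the SPDK target's JSON-RPC server is at the default
    # Unix socket path, /var/tmp/spdk.sock.
    RPC_SOCK = "/var/tmp/spdk.sock"

    # Method and params mirror the failing call in the log: NSID 1 is
    # already in use on cnode1, so the target must reject this request.
    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "nvmf_subsystem_add_ns",
        "params": {
            "nqn": "nqn.2016-06.io.spdk:cnode1",
            "namespace": {"bdev_name": "malloc0", "nsid": 1},
        },
    }

    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(RPC_SOCK)
        sock.sendall(json.dumps(request).encode())
        reply = json.loads(sock.recv(1 << 16).decode())

    # Expected error object, matching the log output:
    #   {"code": -32602, "message": "Invalid parameters"}
    print(reply.get("error"))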
[... triplet repeats 17 more times, 12:17:52.603 through 12:17:52.731 ...]
00:18:56.951 Running I/O for 5 seconds...
[... triplet repeats 5 more times, now ~9 ms apart, 12:17:52.743 through 12:17:52.780 ...]
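To see why every attempt fails the same way, the subsystem's current namespace list can be dumped with the nvmf_get_subsystems RPC; NSID 1 should already appear in it. Another minimal sketch under the same default-socket assumption (a single recv() is good enough for a reply this small, though a real client would read until the JSON parses):

    import json
    import socket

    # nvmf_get_subsystems needs no parameters; it reports each subsystem
    # together with its attached namespaces.
    request = {"jsonrpc": "2.0", "id": 1, "method": "nvmf_get_subsystems"}

    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect("/var/tmp/spdk.sock")
        sock.sendall(json.dumps(request).encode())
        reply = json.loads(sock.recv(1 << 20).decode())

    for subsys in reply["result"]:
        if subsys["nqn"] == "nqn.2016-06.io.spdk:cnode1":
            # Prints the NSIDs already attached; 1 should be among them,
            # which is why every re-add above is rejected.
            print([ns["nsid"] for ns in subsys.get("namespaces", [])])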
[... the same triplet continues, one rejected nvmf_subsystem_add_ns per ~9 ms, 12:17:52.788 through 12:17:53.616 ...]
00:18:57.989 [2024-11-25 12:17:53.625507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.989 [2024-11-25 12:17:53.625525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add
namespace 00:18:57.989 2024/11/25 12:17:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:57.989 [2024-11-25 12:17:53.634774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.989 [2024-11-25 12:17:53.634791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.989 2024/11/25 12:17:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:57.989 [2024-11-25 12:17:53.643908] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.989 [2024-11-25 12:17:53.643927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.989 2024/11/25 12:17:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:57.989 [2024-11-25 12:17:53.652405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.989 [2024-11-25 12:17:53.652422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.989 2024/11/25 12:17:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:57.989 [2024-11-25 12:17:53.662004] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.989 [2024-11-25 12:17:53.662021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.989 2024/11/25 12:17:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:57.989 [2024-11-25 12:17:53.671189] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.989 [2024-11-25 12:17:53.671207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.989 2024/11/25 12:17:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:57.989 [2024-11-25 12:17:53.680286] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.989 [2024-11-25 12:17:53.680304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.989 2024/11/25 12:17:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:18:57.989 [2024-11-25 12:17:53.689728] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.989 [2024-11-25 12:17:53.689755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.989 2024/11/25 12:17:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:57.989 [2024-11-25 12:17:53.698784] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.990 [2024-11-25 12:17:53.698801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.990 2024/11/25 12:17:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:57.990 [2024-11-25 12:17:53.708149] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.990 [2024-11-25 12:17:53.708167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.990 2024/11/25 12:17:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:57.990 [2024-11-25 12:17:53.717395] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.990 [2024-11-25 12:17:53.717412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.990 2024/11/25 12:17:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:57.990 [2024-11-25 12:17:53.726410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.990 [2024-11-25 12:17:53.726428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.990 2024/11/25 12:17:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:57.990 [2024-11-25 12:17:53.736079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.990 [2024-11-25 12:17:53.736098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.990 17712.00 IOPS, 138.38 MiB/s [2024-11-25T12:17:53.866Z] 2024/11/25 12:17:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:57.990 [2024-11-25 12:17:53.745114] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.990 [2024-11-25 12:17:53.745132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:18:57.990 2024/11/25 12:17:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:57.990 [2024-11-25 12:17:53.754240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.990 [2024-11-25 12:17:53.754259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.990 2024/11/25 12:17:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:57.990 [2024-11-25 12:17:53.763655] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.990 [2024-11-25 12:17:53.763675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.990 2024/11/25 12:17:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:57.990 [2024-11-25 12:17:53.772950] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.990 [2024-11-25 12:17:53.772968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.990 2024/11/25 12:17:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:57.990 [2024-11-25 12:17:53.782353] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.990 [2024-11-25 12:17:53.782372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.990 2024/11/25 12:17:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:57.990 [2024-11-25 12:17:53.791674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.990 [2024-11-25 12:17:53.791694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.990 2024/11/25 12:17:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:57.990 [2024-11-25 12:17:53.801322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.990 [2024-11-25 12:17:53.801340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.990 2024/11/25 12:17:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:18:57.990 [2024-11-25 12:17:53.810550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.990 [2024-11-25 12:17:53.810569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.990 2024/11/25 12:17:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:57.990 [2024-11-25 12:17:53.819986] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.990 [2024-11-25 12:17:53.820005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.990 2024/11/25 12:17:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:57.990 [2024-11-25 12:17:53.828581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.990 [2024-11-25 12:17:53.828599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.990 2024/11/25 12:17:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:57.990 [2024-11-25 12:17:53.838289] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.990 [2024-11-25 12:17:53.838306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.990 2024/11/25 12:17:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:57.990 [2024-11-25 12:17:53.847469] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.990 [2024-11-25 12:17:53.847487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.990 2024/11/25 12:17:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:57.990 [2024-11-25 12:17:53.856625] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:57.990 [2024-11-25 12:17:53.856645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:57.990 2024/11/25 12:17:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.356 [2024-11-25 12:17:53.865797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.356 [2024-11-25 12:17:53.865815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.356 2024/11/25 12:17:53 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.356 [2024-11-25 12:17:53.874384] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.356 [2024-11-25 12:17:53.874401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.356 2024/11/25 12:17:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.356 [2024-11-25 12:17:53.883906] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.356 [2024-11-25 12:17:53.883924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.356 2024/11/25 12:17:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.356 [2024-11-25 12:17:53.893215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.356 [2024-11-25 12:17:53.893233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.356 2024/11/25 12:17:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.356 [2024-11-25 12:17:53.902357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.356 [2024-11-25 12:17:53.902377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.356 2024/11/25 12:17:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.357 [2024-11-25 12:17:53.911478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.357 [2024-11-25 12:17:53.911497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.357 2024/11/25 12:17:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.357 [2024-11-25 12:17:53.921190] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.357 [2024-11-25 12:17:53.921207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.357 2024/11/25 12:17:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.357 [2024-11-25 12:17:53.930355] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.357 [2024-11-25 12:17:53.930374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.357 2024/11/25 12:17:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.357 [2024-11-25 12:17:53.939514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.357 [2024-11-25 12:17:53.939535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.357 2024/11/25 12:17:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.357 [2024-11-25 12:17:53.948655] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.357 [2024-11-25 12:17:53.948675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.357 2024/11/25 12:17:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.357 [2024-11-25 12:17:53.958229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.357 [2024-11-25 12:17:53.958247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.357 2024/11/25 12:17:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.357 [2024-11-25 12:17:53.967270] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.357 [2024-11-25 12:17:53.967289] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.357 2024/11/25 12:17:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.357 [2024-11-25 12:17:53.976550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.357 [2024-11-25 12:17:53.976568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.357 2024/11/25 12:17:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.357 [2024-11-25 12:17:53.985639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.357 [2024-11-25 12:17:53.985657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.357 2024/11/25 12:17:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.357 [2024-11-25 12:17:53.994765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.357 [2024-11-25 12:17:53.994782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.357 2024/11/25 12:17:53 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.357 [2024-11-25 12:17:54.004259] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.357 [2024-11-25 12:17:54.004277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.357 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.357 [2024-11-25 12:17:54.013517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.357 [2024-11-25 12:17:54.013534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.357 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.357 [2024-11-25 12:17:54.022840] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.357 [2024-11-25 12:17:54.022857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.357 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.357 [2024-11-25 12:17:54.031932] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.357 [2024-11-25 12:17:54.031950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.357 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.357 [2024-11-25 12:17:54.041341] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.357 [2024-11-25 12:17:54.041360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.357 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.357 [2024-11-25 12:17:54.051059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:18:58.357 [2024-11-25 12:17:54.051077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.357 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.357 [2024-11-25 12:17:54.060602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.357 [2024-11-25 12:17:54.060619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.357 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.357 [2024-11-25 12:17:54.070067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.357 [2024-11-25 12:17:54.070084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.357 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.357 [2024-11-25 12:17:54.079095] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.357 [2024-11-25 12:17:54.079112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.357 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.357 [2024-11-25 12:17:54.088726] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.357 [2024-11-25 12:17:54.088743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.357 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.357 [2024-11-25 12:17:54.097859] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.357 [2024-11-25 12:17:54.097875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.357 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.357 [2024-11-25 12:17:54.107503] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.357 [2024-11-25 12:17:54.107521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.357 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.357 [2024-11-25 12:17:54.116665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.357 [2024-11-25 12:17:54.116683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.357 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.357 [2024-11-25 12:17:54.126518] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.357 [2024-11-25 12:17:54.126536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.357 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.357 [2024-11-25 12:17:54.135542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.357 [2024-11-25 12:17:54.135563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.357 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.357 [2024-11-25 12:17:54.145165] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.357 [2024-11-25 12:17:54.145185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.358 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.358 [2024-11-25 12:17:54.154261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.358 [2024-11-25 12:17:54.154279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.358 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.358 [2024-11-25 12:17:54.163359] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.358 [2024-11-25 12:17:54.163378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.358 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.358 [2024-11-25 12:17:54.171694] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:18:58.358 [2024-11-25 12:17:54.171712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.358 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.358 [2024-11-25 12:17:54.181195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.358 [2024-11-25 12:17:54.181212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.358 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.358 [2024-11-25 12:17:54.190192] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.358 [2024-11-25 12:17:54.190210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.358 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.358 [2024-11-25 12:17:54.199167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.358 [2024-11-25 12:17:54.199185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.358 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.615 [2024-11-25 12:17:54.208261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.615 [2024-11-25 12:17:54.208280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.615 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.615 [2024-11-25 12:17:54.217333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.615 [2024-11-25 12:17:54.217351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.615 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.615 [2024-11-25 12:17:54.227040] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.615 [2024-11-25 12:17:54.227061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.615 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
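The repeated failure above is the test re-issuing the same namespace-attach RPC: NSID 1 is already claimed on nqn.2016-06.io.spdk:cnode1, so every further nvmf_subsystem_add_ns request is rejected in spdk_nvmf_subsystem_add_ns_ext and surfaces to the Go JSON-RPC client as Code=-32602 (Invalid parameters); the %!s(bool=false) in the params dump is Go's fmt error output for a bool printed with the %s verb. A minimal sketch of an equivalent invocation through SPDK's scripts/rpc.py (the script path and the --nsid spelling are assumptions, not taken from this run's scripts; the malloc0 bdev name and NQN come from the log itself):

  # first attach succeeds and claims NSID 1 on the subsystem
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 --nsid 1
  # re-issuing the identical request is rejected with
  # "Requested NSID 1 already in use" -> JSON-RPC Code=-32602 Msg=Invalid parameters
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 --nsid 1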
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.615 [2024-11-25 12:17:54.236202] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.615 [2024-11-25 12:17:54.236221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.615 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.615 [2024-11-25 12:17:54.245695] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.615 [2024-11-25 12:17:54.245713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.615 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.615 [2024-11-25 12:17:54.255254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.615 [2024-11-25 12:17:54.255274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.615 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.615 [2024-11-25 12:17:54.264308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.615 [2024-11-25 12:17:54.264325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.615 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.615 [2024-11-25 12:17:54.273487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.615 [2024-11-25 12:17:54.273505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.616 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.616 [2024-11-25 12:17:54.282917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.616 [2024-11-25 12:17:54.282935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.616 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.616 [2024-11-25 12:17:54.292544] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.616 [2024-11-25 12:17:54.292562] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.616 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.616 [2024-11-25 12:17:54.301609] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.616 [2024-11-25 12:17:54.301627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.616 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.616 [2024-11-25 12:17:54.310967] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.616 [2024-11-25 12:17:54.310985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.616 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.616 [2024-11-25 12:17:54.319994] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.616 [2024-11-25 12:17:54.320012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.616 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.616 [2024-11-25 12:17:54.329962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.616 [2024-11-25 12:17:54.329982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.616 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.616 [2024-11-25 12:17:54.339543] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.616 [2024-11-25 12:17:54.339561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.616 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.616 [2024-11-25 12:17:54.348412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.616 [2024-11-25 12:17:54.348430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.616 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.616 [2024-11-25 12:17:54.358032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.616 [2024-11-25 12:17:54.358051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.616 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.616 [2024-11-25 12:17:54.367656] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.616 [2024-11-25 12:17:54.367675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.616 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.616 [2024-11-25 12:17:54.377265] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.616 [2024-11-25 12:17:54.377282] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.616 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.616 [2024-11-25 12:17:54.386389] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.616 [2024-11-25 12:17:54.386407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.616 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.616 [2024-11-25 12:17:54.395962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.616 [2024-11-25 12:17:54.395980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.616 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.616 [2024-11-25 12:17:54.404845] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.616 [2024-11-25 12:17:54.404861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.616 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.616 [2024-11-25 12:17:54.413745] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.616 [2024-11-25 12:17:54.413770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:18:58.616 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.616 [2024-11-25 12:17:54.423172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.616 [2024-11-25 12:17:54.423190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.616 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.616 [2024-11-25 12:17:54.432123] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.616 [2024-11-25 12:17:54.432139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.616 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.616 [2024-11-25 12:17:54.441085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.616 [2024-11-25 12:17:54.441103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.616 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.616 [2024-11-25 12:17:54.450508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.616 [2024-11-25 12:17:54.450527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.616 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.616 [2024-11-25 12:17:54.459925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.616 [2024-11-25 12:17:54.459948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.616 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.616 [2024-11-25 12:17:54.468661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.616 [2024-11-25 12:17:54.468686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.616 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:18:58.616 [2024-11-25 12:17:54.477848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.616 [2024-11-25 12:17:54.477871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.616 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.616 [2024-11-25 12:17:54.486992] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.616 [2024-11-25 12:17:54.487012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.874 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.874 [2024-11-25 12:17:54.496190] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.874 [2024-11-25 12:17:54.496209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.874 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.874 [2024-11-25 12:17:54.505526] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.874 [2024-11-25 12:17:54.505546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.874 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.874 [2024-11-25 12:17:54.514557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.874 [2024-11-25 12:17:54.514576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.874 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.874 [2024-11-25 12:17:54.523733] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.874 [2024-11-25 12:17:54.523764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.874 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:58.874 [2024-11-25 12:17:54.533293] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:58.874 [2024-11-25 12:17:54.533310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:58.874 2024/11/25 12:17:54 error on 
[... the identical three-line failure repeats for every retry from 12:17:54.486992 through 12:17:54.736865 (console time 00:18:58.616 to 00:18:58.875); only the timestamps change ...]
00:18:58.875 17664.00 IOPS, 138.00 MiB/s [2024-11-25T12:17:54.751Z]
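The interleaved throughput samples come from the I/O workload running alongside the RPC loop. A quick consistency check, assuming the sampler reports the same fixed-size workload in both units, implies an 8 KiB request size:

    17664.00 IOPS * 8 KiB = 141312.00 KiB/s / 1024 = 138.00 MiB/s
    17641.33 IOPS * 8 KiB = 141130.64 KiB/s / 1024 ~= 137.82 MiB/s

Both samples in this stretch (138.00 MiB/s here, 137.82 MiB/s at the end of the run below) fit that size exactly, so the error-path RPC hammering does not appear to disturb the data path materially.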
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.134 [2024-11-25 12:17:54.782641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.134 [2024-11-25 12:17:54.782659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.134 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.134 [2024-11-25 12:17:54.791628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.134 [2024-11-25 12:17:54.791647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.134 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.134 [2024-11-25 12:17:54.800525] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.134 [2024-11-25 12:17:54.800543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.134 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.134 [2024-11-25 12:17:54.809949] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.134 [2024-11-25 12:17:54.809966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.134 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.134 [2024-11-25 12:17:54.818928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.134 [2024-11-25 12:17:54.818945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.134 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.134 [2024-11-25 12:17:54.828292] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.134 [2024-11-25 12:17:54.828311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.134 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.134 [2024-11-25 12:17:54.837274] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:18:59.134 [2024-11-25 12:17:54.837292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.134 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.134 [2024-11-25 12:17:54.846065] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.134 [2024-11-25 12:17:54.846082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.134 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.134 [2024-11-25 12:17:54.855173] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.134 [2024-11-25 12:17:54.855192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.134 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.134 [2024-11-25 12:17:54.864286] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.134 [2024-11-25 12:17:54.864304] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.134 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.134 [2024-11-25 12:17:54.874188] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.134 [2024-11-25 12:17:54.874205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.134 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.134 [2024-11-25 12:17:54.883611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.135 [2024-11-25 12:17:54.883629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.135 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.135 [2024-11-25 12:17:54.892846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.135 [2024-11-25 12:17:54.892864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.135 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.135 [2024-11-25 12:17:54.901903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.135 [2024-11-25 12:17:54.901928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.135 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.135 [2024-11-25 12:17:54.910927] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.135 [2024-11-25 12:17:54.910945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.135 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.135 [2024-11-25 12:17:54.920377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.135 [2024-11-25 12:17:54.920395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.135 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.135 [2024-11-25 12:17:54.929459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.135 [2024-11-25 12:17:54.929477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.135 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.135 [2024-11-25 12:17:54.939054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.135 [2024-11-25 12:17:54.939075] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.135 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.135 [2024-11-25 12:17:54.948005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.135 [2024-11-25 12:17:54.948024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.135 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.135 [2024-11-25 12:17:54.956881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:18:59.135 [2024-11-25 12:17:54.956898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.135 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.135 [2024-11-25 12:17:54.966502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.135 [2024-11-25 12:17:54.966521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.135 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.135 [2024-11-25 12:17:54.975542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.135 [2024-11-25 12:17:54.975560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.135 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.135 [2024-11-25 12:17:54.984574] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.135 [2024-11-25 12:17:54.984592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.135 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.135 [2024-11-25 12:17:54.994005] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.135 [2024-11-25 12:17:54.994022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.135 2024/11/25 12:17:54 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.135 [2024-11-25 12:17:55.003026] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.135 [2024-11-25 12:17:55.003044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.135 2024/11/25 12:17:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.394 [2024-11-25 12:17:55.012188] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.394 [2024-11-25 12:17:55.012206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.394 2024/11/25 12:17:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.394 [2024-11-25 12:17:55.021784] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.394 [2024-11-25 12:17:55.021802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.394 2024/11/25 12:17:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.394 [2024-11-25 12:17:55.030942] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.394 [2024-11-25 12:17:55.030960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.394 2024/11/25 12:17:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.394 [2024-11-25 12:17:55.040414] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.394 [2024-11-25 12:17:55.040432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.394 2024/11/25 12:17:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.394 [2024-11-25 12:17:55.049968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.394 [2024-11-25 12:17:55.049985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.394 2024/11/25 12:17:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.394 [2024-11-25 12:17:55.059725] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.394 [2024-11-25 12:17:55.059743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.394 2024/11/25 12:17:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.394 [2024-11-25 12:17:55.068941] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.394 [2024-11-25 12:17:55.068958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.394 2024/11/25 12:17:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.394 [2024-11-25 12:17:55.078085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.394 [2024-11-25 12:17:55.078103] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.394 2024/11/25 12:17:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.394 [2024-11-25 12:17:55.087044] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.394 [2024-11-25 12:17:55.087062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.394 2024/11/25 12:17:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.394 [2024-11-25 12:17:55.096665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.394 [2024-11-25 12:17:55.096683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.394 2024/11/25 12:17:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.394 [2024-11-25 12:17:55.106198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.394 [2024-11-25 12:17:55.106215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.394 2024/11/25 12:17:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.394 [2024-11-25 12:17:55.115366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.394 [2024-11-25 12:17:55.115385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.394 2024/11/25 12:17:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.394 [2024-11-25 12:17:55.124553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.394 [2024-11-25 12:17:55.124573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.394 2024/11/25 12:17:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.394 [2024-11-25 12:17:55.133792] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.394 [2024-11-25 12:17:55.133809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.394 2024/11/25 12:17:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.394 [2024-11-25 12:17:55.143319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.394 [2024-11-25 12:17:55.143337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.394 2024/11/25 12:17:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.394 [2024-11-25 12:17:55.152433] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.394 [2024-11-25 12:17:55.152451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.394 2024/11/25 12:17:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.394 [2024-11-25 12:17:55.161582] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.394 [2024-11-25 12:17:55.161600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.394 2024/11/25 12:17:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.394 [2024-11-25 12:17:55.170678] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.394 [2024-11-25 12:17:55.170695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.394 2024/11/25 12:17:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.394 [2024-11-25 12:17:55.180043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.395 [2024-11-25 12:17:55.180060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.395 2024/11/25 12:17:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.395 [2024-11-25 12:17:55.189187] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.395 [2024-11-25 12:17:55.189204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.395 2024/11/25 12:17:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.395 [2024-11-25 12:17:55.198357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.395 [2024-11-25 12:17:55.198375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:18:59.395 2024/11/25 12:17:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.395 [2024-11-25 12:17:55.207391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.395 [2024-11-25 12:17:55.207409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.395 2024/11/25 12:17:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.395 [2024-11-25 12:17:55.216439] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.395 [2024-11-25 12:17:55.216458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.395 2024/11/25 12:17:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.395 [2024-11-25 12:17:55.225559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.395 [2024-11-25 12:17:55.225577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.395 2024/11/25 12:17:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.395 [2024-11-25 12:17:55.234688] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.395 [2024-11-25 12:17:55.234707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.395 2024/11/25 12:17:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.395 [2024-11-25 12:17:55.243581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.395 [2024-11-25 12:17:55.243600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.395 2024/11/25 12:17:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.395 [2024-11-25 12:17:55.253069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.395 [2024-11-25 12:17:55.253087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.395 2024/11/25 12:17:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:18:59.395 [2024-11-25 12:17:55.262634] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.395 [2024-11-25 12:17:55.262652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.395 2024/11/25 12:17:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.655 [2024-11-25 12:17:55.271980] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.655 [2024-11-25 12:17:55.271998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.655 2024/11/25 12:17:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.655 [2024-11-25 12:17:55.281286] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.655 [2024-11-25 12:17:55.281306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.655 2024/11/25 12:17:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.655 [2024-11-25 12:17:55.290338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.655 [2024-11-25 12:17:55.290357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.655 2024/11/25 12:17:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.655 [2024-11-25 12:17:55.299253] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.655 [2024-11-25 12:17:55.299271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.655 2024/11/25 12:17:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.655 [2024-11-25 12:17:55.308334] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.655 [2024-11-25 12:17:55.308351] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.655 2024/11/25 12:17:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.655 [2024-11-25 12:17:55.317926] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.655 [2024-11-25 12:17:55.317943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.655 2024/11/25 12:17:55 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.655 [2024-11-25 12:17:55.327012] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.655 [2024-11-25 12:17:55.327034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.655 2024/11/25 12:17:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.655 [2024-11-25 12:17:55.335989] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.655 [2024-11-25 12:17:55.336007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.655 2024/11/25 12:17:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.655 [2024-11-25 12:17:55.345360] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.655 [2024-11-25 12:17:55.345378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.655 2024/11/25 12:17:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.655 [2024-11-25 12:17:55.353816] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.655 [2024-11-25 12:17:55.353834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.655 2024/11/25 12:17:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.655 [2024-11-25 12:17:55.363343] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.655 [2024-11-25 12:17:55.363361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.655 2024/11/25 12:17:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.655 [2024-11-25 12:17:55.372307] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.655 [2024-11-25 12:17:55.372325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.655 2024/11/25 12:17:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.655 [2024-11-25 12:17:55.381591] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.655 [2024-11-25 12:17:55.381609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.655 2024/11/25 12:17:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.655 [2024-11-25 12:17:55.390629] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.655 [2024-11-25 12:17:55.390647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.655 2024/11/25 12:17:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.655 [2024-11-25 12:17:55.399534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.655 [2024-11-25 12:17:55.399553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.655 2024/11/25 12:17:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.655 [2024-11-25 12:17:55.409054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.655 [2024-11-25 12:17:55.409072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.655 2024/11/25 12:17:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.655 [2024-11-25 12:17:55.418475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.655 [2024-11-25 12:17:55.418493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.655 2024/11/25 12:17:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.655 [2024-11-25 12:17:55.427721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.655 [2024-11-25 12:17:55.427740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.655 2024/11/25 12:17:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.655 [2024-11-25 12:17:55.436679] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.655 [2024-11-25 12:17:55.436698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.655 2024/11/25 12:17:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.655 [2024-11-25 12:17:55.445605] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.655 [2024-11-25 12:17:55.445625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.655 2024/11/25 12:17:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.655 [2024-11-25 12:17:55.454666] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.655 [2024-11-25 12:17:55.454683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.655 2024/11/25 12:17:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.655 [2024-11-25 12:17:55.463733] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.655 [2024-11-25 12:17:55.463762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.655 2024/11/25 12:17:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.655 [2024-11-25 12:17:55.473127] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.655 [2024-11-25 12:17:55.473145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.655 2024/11/25 12:17:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.655 [2024-11-25 12:17:55.482109] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.655 [2024-11-25 12:17:55.482126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.655 2024/11/25 12:17:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.655 [2024-11-25 12:17:55.491254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:59.655 [2024-11-25 12:17:55.491273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:59.656 2024/11/25 12:17:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:18:59.656 [2024-11-25 12:17:55.500944] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use
00:18:59.656 [2024-11-25 12:17:55.500964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:59.656 2024/11/25 12:17:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[… the same three-line sequence (spdk_nvmf_subsystem_add_ns_ext NSID-in-use error, nvmf_rpc_ns_paused error, Go client JSON-RPC error with Code=-32602 Msg=Invalid parameters) repeats for every retry from 12:17:55.510156 through 12:17:55.732262 …]
00:18:59.915 17641.33 IOPS, 137.82 MiB/s [2024-11-25T12:17:55.791Z]
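The fio progress sample interleaved above (17641.33 IOPS at 137.82 MiB/s) is consistent with 8 KiB I/Os; the block size is an inference from the two reported figures, not something the log states directly. A quick check:

```python
# Back-of-envelope check that the fio progress samples imply 8 KiB I/Os.
# The (IOPS, MiB/s) pairs are taken from the log; 8192 B is the inference.
samples = [(17641.33, 137.82), (17551.00, 137.12)]

for iops, mibps in samples:
    block = mibps * 1024 * 1024 / iops          # implied bytes per I/O
    print(f"{iops:>9.2f} IOPS @ {mibps:.2f} MiB/s -> ~{block:.0f} B/io")
# Both samples come out at ~8192 B, i.e. 8 KiB requests.
```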
00:18:59.915 [2024-11-25 12:17:55.741809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:59.915 [2024-11-25 12:17:55.741826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:59.915 2024/11/25 12:17:55 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[… the same three-line sequence repeats for every retry from 12:17:55.750719 through 12:17:56.092671 …]
00:19:00.433 [2024-11-25 12:17:56.102131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:19:00.433 [2024-11-25 12:17:56.102147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:19:00.433 2024/11/25 12:17:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
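Every rejected call above is the same negative-path probe: re-adding NSID 1 to nqn.2016-06.io.spdk:cnode1 while that NSID is already claimed. A minimal sketch of one such request over SPDK's JSON-RPC interface, assuming the default Unix socket at /var/tmp/spdk.sock and that the small reply fits a single recv (both assumptions; the log only shows the method name and params):

```python
import json
import socket

# One nvmf_subsystem_add_ns request, mirroring the params logged above.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "nvmf_subsystem_add_ns",
    "params": {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "namespace": {"bdev_name": "malloc0", "nsid": 1},
    },
}

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
    sock.connect("/var/tmp/spdk.sock")   # assumed default SPDK RPC socket
    sock.sendall(json.dumps(request).encode())
    reply = json.loads(sock.recv(4096))  # assumes the reply fits one recv
    # When NSID 1 is already in use, the target answers with the error
    # seen in the log: code -32602, "Invalid parameters".
    print(reply.get("error"))
```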
[… the same three-line sequence repeats for every retry from 12:17:56.111295 through 12:17:56.463176 …]
00:19:00.704 [2024-11-25 12:17:56.476860] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:19:00.704 [2024-11-25 12:17:56.476880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:19:00.704 2024/11/25 12:17:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
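Code=-32602 in the client lines above is the standard JSON-RPC 2.0 "Invalid params" code, which is why a harness driving this loop can count these responses as the expected outcome rather than a transport fault. A sketch of such a check; the helper name and reply shape are illustrative, not taken from SPDK:

```python
# Illustrative helper: distinguish the expected NSID-collision rejection
# from any other RPC failure. Names here are hypothetical.
INVALID_PARAMS = -32602  # JSON-RPC 2.0 "Invalid params"

def is_expected_nsid_collision(reply: dict) -> bool:
    error = reply.get("error") or {}
    return error.get("code") == INVALID_PARAMS

# Shape matches the errors logged above.
assert is_expected_nsid_collision(
    {"jsonrpc": "2.0", "id": 1,
     "error": {"code": -32602, "message": "Invalid parameters"}}
)
```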
[… the same three-line sequence repeats for every retry from 12:17:56.486264 through 12:17:56.730946 …]
00:19:00.964 17551.00 IOPS, 137.12 MiB/s [2024-11-25T12:17:56.840Z]
[… further retries from 12:17:56.740717 through 12:17:56.778759 fail the same way …]
00:19:00.964 [2024-11-25 12:17:56.788167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:19:00.964 [2024-11-25 12:17:56.788186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add
namespace 00:19:00.964 2024/11/25 12:17:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:00.964 [2024-11-25 12:17:56.797543] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:00.964 [2024-11-25 12:17:56.797565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:00.964 2024/11/25 12:17:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:00.964 [2024-11-25 12:17:56.807301] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:00.964 [2024-11-25 12:17:56.807320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:00.964 2024/11/25 12:17:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:00.964 [2024-11-25 12:17:56.816647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:00.964 [2024-11-25 12:17:56.816665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:00.964 2024/11/25 12:17:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:00.964 [2024-11-25 12:17:56.825971] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:00.964 [2024-11-25 12:17:56.825990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:00.964 2024/11/25 12:17:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:00.964 [2024-11-25 12:17:56.835250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:00.964 [2024-11-25 12:17:56.835269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.222 2024/11/25 12:17:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.222 [2024-11-25 12:17:56.844655] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.222 [2024-11-25 12:17:56.844679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.222 2024/11/25 12:17:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:19:01.222 [2024-11-25 12:17:56.853926] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.222 [2024-11-25 12:17:56.853947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.222 2024/11/25 12:17:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.222 [2024-11-25 12:17:56.863295] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.222 [2024-11-25 12:17:56.863314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.222 2024/11/25 12:17:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.222 [2024-11-25 12:17:56.872878] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.222 [2024-11-25 12:17:56.872896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.222 2024/11/25 12:17:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.222 [2024-11-25 12:17:56.882243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.222 [2024-11-25 12:17:56.882267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.222 2024/11/25 12:17:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.222 [2024-11-25 12:17:56.891618] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.222 [2024-11-25 12:17:56.891638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.222 2024/11/25 12:17:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.222 [2024-11-25 12:17:56.901029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.222 [2024-11-25 12:17:56.901048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.222 2024/11/25 12:17:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.223 [2024-11-25 12:17:56.910932] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.223 [2024-11-25 12:17:56.910951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.223 2024/11/25 12:17:56 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.223 [2024-11-25 12:17:56.920316] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.223 [2024-11-25 12:17:56.920334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.223 2024/11/25 12:17:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.223 [2024-11-25 12:17:56.934576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.223 [2024-11-25 12:17:56.934598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.223 2024/11/25 12:17:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.223 [2024-11-25 12:17:56.943567] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.223 [2024-11-25 12:17:56.943589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.223 2024/11/25 12:17:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.223 [2024-11-25 12:17:56.953692] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.223 [2024-11-25 12:17:56.953712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.223 2024/11/25 12:17:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.223 [2024-11-25 12:17:56.963207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.223 [2024-11-25 12:17:56.963226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.223 2024/11/25 12:17:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.223 [2024-11-25 12:17:56.972865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.223 [2024-11-25 12:17:56.972890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.223 2024/11/25 12:17:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.223 [2024-11-25 12:17:56.982468] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.223 [2024-11-25 12:17:56.982487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.223 2024/11/25 12:17:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.223 [2024-11-25 12:17:56.991824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.223 [2024-11-25 12:17:56.991843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.223 2024/11/25 12:17:56 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.223 [2024-11-25 12:17:57.001385] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.223 [2024-11-25 12:17:57.001407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.223 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.223 [2024-11-25 12:17:57.010627] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.223 [2024-11-25 12:17:57.010646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.223 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.223 [2024-11-25 12:17:57.024857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.223 [2024-11-25 12:17:57.024877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.223 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.223 [2024-11-25 12:17:57.033708] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.223 [2024-11-25 12:17:57.033727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.223 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.223 [2024-11-25 12:17:57.043663] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.223 [2024-11-25 12:17:57.043682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.223 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.223 [2024-11-25 12:17:57.053281] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.223 [2024-11-25 12:17:57.053298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.223 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.223 [2024-11-25 12:17:57.063171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.223 [2024-11-25 12:17:57.063190] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.223 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.223 [2024-11-25 12:17:57.072709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.223 [2024-11-25 12:17:57.072728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.223 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.223 [2024-11-25 12:17:57.082054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.223 [2024-11-25 12:17:57.082073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.223 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.223 [2024-11-25 12:17:57.090769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.223 [2024-11-25 12:17:57.090790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.223 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.482 [2024-11-25 12:17:57.099872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.482 [2024-11-25 12:17:57.099895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.482 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.482 [2024-11-25 12:17:57.109103] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:19:01.482 [2024-11-25 12:17:57.109121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.482 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.482 [2024-11-25 12:17:57.118952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.482 [2024-11-25 12:17:57.118975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.482 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.482 [2024-11-25 12:17:57.128374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.482 [2024-11-25 12:17:57.128395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.482 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.482 [2024-11-25 12:17:57.137896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.482 [2024-11-25 12:17:57.137914] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.482 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.482 [2024-11-25 12:17:57.147615] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.482 [2024-11-25 12:17:57.147634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.482 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.482 [2024-11-25 12:17:57.156992] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.482 [2024-11-25 12:17:57.157011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.482 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.482 [2024-11-25 12:17:57.166557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.482 [2024-11-25 12:17:57.166578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.482 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.482 [2024-11-25 12:17:57.175458] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.482 [2024-11-25 12:17:57.175484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.482 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.482 [2024-11-25 12:17:57.185461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.482 [2024-11-25 12:17:57.185479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.482 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.482 [2024-11-25 12:17:57.194957] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.482 [2024-11-25 12:17:57.194976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.482 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.482 [2024-11-25 12:17:57.204826] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.482 [2024-11-25 12:17:57.204844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.482 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.482 [2024-11-25 12:17:57.214908] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.482 [2024-11-25 12:17:57.214926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.483 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.483 [2024-11-25 12:17:57.224383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.483 [2024-11-25 12:17:57.224402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.483 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.483 [2024-11-25 12:17:57.233550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:19:01.483 [2024-11-25 12:17:57.233568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.483 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.483 [2024-11-25 12:17:57.242958] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.483 [2024-11-25 12:17:57.242979] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.483 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.483 [2024-11-25 12:17:57.252394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.483 [2024-11-25 12:17:57.252419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.483 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.483 [2024-11-25 12:17:57.261873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.483 [2024-11-25 12:17:57.261893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.483 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.483 [2024-11-25 12:17:57.271415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.483 [2024-11-25 12:17:57.271436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.483 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.483 [2024-11-25 12:17:57.280282] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.483 [2024-11-25 12:17:57.280299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.483 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.483 [2024-11-25 12:17:57.289692] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.483 [2024-11-25 12:17:57.289710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.483 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.483 [2024-11-25 12:17:57.299825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.483 [2024-11-25 12:17:57.299848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.483 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.483 [2024-11-25 12:17:57.309469] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.483 [2024-11-25 12:17:57.309488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.483 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.483 [2024-11-25 12:17:57.319032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.483 [2024-11-25 12:17:57.319050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.483 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.483 [2024-11-25 12:17:57.328916] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.483 [2024-11-25 12:17:57.328937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.483 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.483 [2024-11-25 12:17:57.338255] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.483 [2024-11-25 12:17:57.338273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.483 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.483 [2024-11-25 12:17:57.347988] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.483 [2024-11-25 12:17:57.348009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.483 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.741 [2024-11-25 12:17:57.357501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.741 [2024-11-25 12:17:57.357520] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.741 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.741 [2024-11-25 12:17:57.367490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.741 [2024-11-25 12:17:57.367507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.741 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.741 [2024-11-25 12:17:57.376100] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.741 [2024-11-25 12:17:57.376116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.742 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.742 [2024-11-25 12:17:57.385337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.742 [2024-11-25 12:17:57.385354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.742 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.742 [2024-11-25 12:17:57.394946] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.742 [2024-11-25 12:17:57.394964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.742 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.742 [2024-11-25 12:17:57.403912] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.742 [2024-11-25 12:17:57.403929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.742 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.742 [2024-11-25 12:17:57.412879] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.742 [2024-11-25 12:17:57.412895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.742 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.742 [2024-11-25 12:17:57.422236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.742 [2024-11-25 12:17:57.422253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.742 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.742 [2024-11-25 12:17:57.431489] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.742 [2024-11-25 12:17:57.431507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.742 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.742 [2024-11-25 12:17:57.440703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.742 [2024-11-25 12:17:57.440721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.742 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.742 [2024-11-25 12:17:57.450488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.742 [2024-11-25 12:17:57.450507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.742 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.742 [2024-11-25 12:17:57.459711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.742 [2024-11-25 12:17:57.459730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.742 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.742 [2024-11-25 12:17:57.468981] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.742 [2024-11-25 12:17:57.468997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.742 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.742 [2024-11-25 12:17:57.478063] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.742 [2024-11-25 12:17:57.478081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:19:01.742 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.742 [2024-11-25 12:17:57.487228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.742 [2024-11-25 12:17:57.487245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.742 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.742 [2024-11-25 12:17:57.496504] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.742 [2024-11-25 12:17:57.496530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.742 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.742 [2024-11-25 12:17:57.506294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.742 [2024-11-25 12:17:57.506313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.742 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.742 [2024-11-25 12:17:57.515705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.742 [2024-11-25 12:17:57.515724] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.742 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.742 [2024-11-25 12:17:57.525146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.742 [2024-11-25 12:17:57.525167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.742 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.742 [2024-11-25 12:17:57.534807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.742 [2024-11-25 12:17:57.534825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.742 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:19:01.742 [2024-11-25 12:17:57.544559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.742 [2024-11-25 12:17:57.544583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.742 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.742 [2024-11-25 12:17:57.554064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.742 [2024-11-25 12:17:57.554084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.742 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.742 [2024-11-25 12:17:57.563486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.742 [2024-11-25 12:17:57.563506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.742 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.742 [2024-11-25 12:17:57.572950] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.742 [2024-11-25 12:17:57.572969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.742 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.742 [2024-11-25 12:17:57.582199] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.742 [2024-11-25 12:17:57.582218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.742 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.742 [2024-11-25 12:17:57.591489] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.742 [2024-11-25 12:17:57.591507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.742 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:01.742 [2024-11-25 12:17:57.600703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:19:01.742 [2024-11-25 12:17:57.600722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:01.743 2024/11/25 12:17:57 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:19:01.743 [2024-11-25 12:17:57.609348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:19:01.743 [2024-11-25 12:17:57.609366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:19:01.743 2024/11/25 12:17:57 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[the same three-line failure repeats verbatim, timestamps only advancing, at 12:17:57.619, .628, .638, .647, .656, .666, .675, .684, .693, .703, .712, .721 and .731]
00:19:02.002 17397.00 IOPS, 135.91 MiB/s [2024-11-25T12:17:57.878Z]
[one more repeat at 12:17:57.740]
00:19:02.002
00:19:02.002 Latency(us)
00:19:02.002 [2024-11-25T12:17:57.878Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:02.002 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:19:02.002 Nvme1n1 : 5.01 17397.15 135.92 0.00 0.00 7350.85 2003.89 12300.60
00:19:02.002 [2024-11-25T12:17:57.878Z] ===================================================================================================================
00:19:02.002 [2024-11-25T12:17:57.878Z] Total : 17397.15 135.92 0.00 0.00 7350.85 2003.89 12300.60
[three more repeats at 12:17:57.746, .754 and .762]
[the NSID-1 failure triplet recurs at 12:17:57.770, .782, .790, .798, .806, .814, .822, .830, .838 and .850]
00:19:02.002 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (68466) - No such process
00:19:02.002 12:17:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 68466
00:19:02.002 12:17:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:19:02.002 12:17:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:02.002 12:17:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:19:02.002 12:17:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:02.002 12:17:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:19:02.002 12:17:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:02.002 12:17:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:19:02.002 delay0
00:19:02.002 12:17:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:02.002 12:17:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:19:02.002 12:17:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:02.002 12:17:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:19:02.261 12:17:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
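For context: the long run of Code=-32602 failures above is target/zcopy.sh deliberately re-adding NSID 1 while bdevperf I/O is in flight; every attempt is expected to fail until the namespace is removed (hence the background process 68466 being killed and reaped). This run drove the RPCs through the Go JSON-RPC client (SPDK_JSONRPC_GO_CLIENT=1 in the conf); a minimal shell sketch of the same expected-failure loop using the Python client instead -- the iteration count is illustrative, not taken from zcopy.sh -- would be:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Each call should come back with JSON-RPC -32602 ("Invalid parameters")
  # because NSID 1 is still attached to nqn.2016-06.io.spdk:cnode1.
  for i in $(seq 1 30); do
    "$rpc" nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 malloc0 \
      && { echo "unexpected success on attempt $i" >&2; exit 1; }
  done

The remove_ns / bdev_delay_create / add_ns sequence just above then swaps the raw malloc0 bdev for the delay0 wrapper before the abort test that follows.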
00:19:02.261 12:17:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:19:02.261 [2024-11-25 12:17:58.031128] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:19:08.813 Initializing NVMe Controllers 00:19:08.813 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:19:08.813 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:08.813 Initialization complete. Launching workers. 00:19:08.813 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 114 00:19:08.813 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 401, failed to submit 33 00:19:08.813 success 203, unsuccessful 198, failed 0 00:19:08.813 12:18:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:19:08.813 12:18:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:19:08.813 12:18:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:08.813 12:18:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:19:09.746 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:09.746 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:19:09.746 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:09.746 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:09.746 rmmod nvme_tcp 00:19:09.746 rmmod nvme_fabrics 00:19:09.746 rmmod nvme_keyring 00:19:09.746 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:09.746 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:19:09.746 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:19:09.746 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 68297 ']' 00:19:09.746 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 68297 00:19:09.746 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 68297 ']' 00:19:09.746 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 68297 00:19:09.746 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:19:09.746 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:09.746 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68297 00:19:09.746 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:09.746 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:09.746 killing process with pid 68297 00:19:09.746 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68297' 00:19:09.746 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 68297 00:19:09.746 12:18:05 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 68297 00:19:09.746 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:09.746 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:09.746 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:09.746 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:19:09.747 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:19:09.747 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:09.747 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:19:09.747 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:09.747 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:09.747 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:09.747 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:09.747 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:09.747 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:10.005 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:10.005 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:10.005 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:10.005 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:10.005 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:10.005 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:10.005 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:10.005 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:10.005 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:10.005 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:10.005 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:10.005 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:10.005 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:10.005 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:19:10.005 00:19:10.005 real 0m25.331s 00:19:10.005 user 0m42.600s 00:19:10.005 sys 0m4.450s 00:19:10.005 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:10.005 ************************************ 00:19:10.005 END TEST nvmf_zcopy 00:19:10.005 ************************************ 00:19:10.005 12:18:05 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:19:10.006 12:18:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:19:10.006 12:18:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:10.006 12:18:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:10.006 12:18:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:19:10.006 ************************************ 00:19:10.006 START TEST nvmf_nmic 00:19:10.006 ************************************ 00:19:10.006 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:19:10.006 * Looking for test storage... 00:19:10.006 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:10.006 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:10.006 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:10.006 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:19:10.266 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:10.266 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:10.266 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:10.266 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:10.266 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:19:10.266 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:19:10.266 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:19:10.266 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:19:10.266 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:19:10.266 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:19:10.266 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:19:10.266 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:10.266 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:19:10.266 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:19:10.266 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:10.266 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:10.266 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:19:10.266 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:19:10.266 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:10.266 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:19:10.266 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:19:10.266 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:19:10.266 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:19:10.266 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:10.266 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:19:10.266 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:19:10.266 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:10.266 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:10.266 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:19:10.266 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:10.266 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:10.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.266 --rc genhtml_branch_coverage=1 00:19:10.266 --rc genhtml_function_coverage=1 00:19:10.266 --rc genhtml_legend=1 00:19:10.266 --rc geninfo_all_blocks=1 00:19:10.266 --rc geninfo_unexecuted_blocks=1 00:19:10.266 00:19:10.266 ' 00:19:10.266 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:10.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.266 --rc genhtml_branch_coverage=1 00:19:10.266 --rc genhtml_function_coverage=1 00:19:10.266 --rc genhtml_legend=1 00:19:10.266 --rc geninfo_all_blocks=1 00:19:10.266 --rc geninfo_unexecuted_blocks=1 00:19:10.266 00:19:10.266 ' 00:19:10.266 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:10.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.266 --rc genhtml_branch_coverage=1 00:19:10.266 --rc genhtml_function_coverage=1 00:19:10.266 --rc genhtml_legend=1 00:19:10.266 --rc geninfo_all_blocks=1 00:19:10.266 --rc geninfo_unexecuted_blocks=1 00:19:10.266 00:19:10.266 ' 00:19:10.266 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:10.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:10.266 --rc genhtml_branch_coverage=1 00:19:10.266 --rc genhtml_function_coverage=1 00:19:10.266 --rc genhtml_legend=1 00:19:10.266 --rc geninfo_all_blocks=1 00:19:10.266 --rc geninfo_unexecuted_blocks=1 00:19:10.266 00:19:10.266 ' 00:19:10.266 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:10.266 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:19:10.266 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:10.266 12:18:05 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:10.266 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:10.266 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:10.266 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:10.267 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:19:10.267 12:18:05 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:10.267 Cannot 
find device "nvmf_init_br" 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:19:10.267 12:18:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:10.267 Cannot find device "nvmf_init_br2" 00:19:10.267 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:19:10.267 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:10.267 Cannot find device "nvmf_tgt_br" 00:19:10.267 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:19:10.267 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:10.267 Cannot find device "nvmf_tgt_br2" 00:19:10.267 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:19:10.267 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:10.267 Cannot find device "nvmf_init_br" 00:19:10.267 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:19:10.267 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:10.267 Cannot find device "nvmf_init_br2" 00:19:10.267 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:19:10.267 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:10.267 Cannot find device "nvmf_tgt_br" 00:19:10.267 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:19:10.267 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:10.267 Cannot find device "nvmf_tgt_br2" 00:19:10.267 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:19:10.267 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:10.267 Cannot find device "nvmf_br" 00:19:10.267 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:19:10.267 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:10.267 Cannot find device "nvmf_init_if" 00:19:10.267 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:19:10.267 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:10.267 Cannot find device "nvmf_init_if2" 00:19:10.267 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:19:10.267 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:10.267 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:10.267 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:19:10.267 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:10.267 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:10.268 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:19:10.268 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:10.268 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
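The "Cannot find device" / "No such file or directory" messages above are just nvmf_veth_init probing for leftovers from a previous run before rebuilding the topology from scratch; the ip commands traced next create it. A condensed sketch of the finished layout, using the interface names and addresses from nvmf/common.sh and showing only the first initiator/target pair (the *_if2 pair is wired up the same way):

  ip netns add nvmf_tgt_ns_spdk                             # target gets its own netns
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk            # move target end into the netns
  ip addr add 10.0.0.1/24 dev nvmf_init_if                  # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
  ip link add nvmf_br type bridge                           # one bridge joins the peer ends
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP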
00:19:10.268 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:10.268 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:10.268 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:10.268 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:10.268 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:10.526 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:10.526 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:10.526 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:10.526 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:10.526 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:10.526 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:10.526 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:10.526 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:10.526 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:10.526 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:10.526 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:10.526 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:10.526 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:10.526 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:10.527 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:10.527 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:10.527 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:10.527 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:10.527 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:10.527 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:10.527 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:10.527 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:10.527 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:10.527 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:10.527 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:10.527 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:10.527 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:10.527 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:19:10.527 00:19:10.527 --- 10.0.0.3 ping statistics --- 00:19:10.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:10.527 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:19:10.527 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:10.527 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:10.527 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.037 ms 00:19:10.527 00:19:10.527 --- 10.0.0.4 ping statistics --- 00:19:10.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:10.527 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:19:10.527 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:10.527 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:10.527 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:19:10.527 00:19:10.527 --- 10.0.0.1 ping statistics --- 00:19:10.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:10.527 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:19:10.527 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:10.527 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:10.527 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.033 ms 00:19:10.527 00:19:10.527 --- 10.0.0.2 ping statistics --- 00:19:10.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:10.527 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:19:10.527 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:10.527 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:19:10.527 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:10.527 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:10.527 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:10.527 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:10.527 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:10.527 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:10.527 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:10.527 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:19:10.527 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:10.527 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:10.527 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:10.527 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=68853 00:19:10.527 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 68853 00:19:10.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:10.527 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 68853 ']' 00:19:10.527 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:10.527 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:10.527 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:10.527 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:10.527 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:10.527 12:18:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:10.527 [2024-11-25 12:18:06.305462] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
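With addressing verified by the pings above, nvmfappstart launches the target inside the namespace (the exact command is traced above: nvmf_tgt -i 0 -e 0xFFFF -m 0xF) and waitforlisten blocks until the app's RPC socket answers. A rough sketch of that start-and-wait pattern -- the spdk_get_version probe and the 0.5 s poll interval are illustrative, not lifted from autotest_common.sh:

  ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  pid=$!
  # Poll the default /var/tmp/spdk.sock until the app services RPCs.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version >/dev/null 2>&1; do
    kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
  done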
00:19:10.527 [2024-11-25 12:18:06.305522] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:10.785 [2024-11-25 12:18:06.447466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:10.785 [2024-11-25 12:18:06.485741] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:10.785 [2024-11-25 12:18:06.485795] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:10.785 [2024-11-25 12:18:06.485801] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:10.785 [2024-11-25 12:18:06.485806] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:10.785 [2024-11-25 12:18:06.485811] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:10.785 [2024-11-25 12:18:06.486529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:10.785 [2024-11-25 12:18:06.486811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:10.785 [2024-11-25 12:18:06.486983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:10.785 [2024-11-25 12:18:06.486984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:11.350 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:11.350 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:19:11.350 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:11.350 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:11.350 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:11.350 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:11.350 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:11.350 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.350 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:11.350 [2024-11-25 12:18:07.183904] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:11.350 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.350 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:11.351 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.351 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:11.351 Malloc0 00:19:11.351 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.351 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:11.351 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.351 12:18:07 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:11.609 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.609 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:11.609 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.609 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:11.609 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.609 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:11.609 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.609 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:11.609 [2024-11-25 12:18:07.247369] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:11.609 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.609 test case1: single bdev can't be used in multiple subsystems 00:19:11.609 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:19:11.610 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:19:11.610 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.610 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:11.610 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.610 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:19:11.610 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.610 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:11.610 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.610 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:19:11.610 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:19:11.610 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.610 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:11.610 [2024-11-25 12:18:07.271281] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:19:11.610 [2024-11-25 12:18:07.271307] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:19:11.610 [2024-11-25 12:18:07.271312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:11.610 2024/11/25 12:18:07 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 
no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:19:11.610 request: 00:19:11.610 { 00:19:11.610 "method": "nvmf_subsystem_add_ns", 00:19:11.610 "params": { 00:19:11.610 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:19:11.610 "namespace": { 00:19:11.610 "bdev_name": "Malloc0", 00:19:11.610 "no_auto_visible": false 00:19:11.610 } 00:19:11.610 } 00:19:11.610 } 00:19:11.610 Got JSON-RPC error response 00:19:11.610 GoRPCClient: error on JSON-RPC call 00:19:11.610 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:11.610 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:19:11.610 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:19:11.610 Adding namespace failed - expected result. 00:19:11.610 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:19:11.610 test case2: host connect to nvmf target in multiple paths 00:19:11.610 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:19:11.610 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:11.610 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.610 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:11.610 [2024-11-25 12:18:07.283352] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:11.610 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.610 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid=5934b511-67a7-4c73-84f0-52fdf36bb79d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:19:11.610 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid=5934b511-67a7-4c73-84f0-52fdf36bb79d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:19:11.868 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:19:11.868 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:19:11.868 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:11.868 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:19:11.868 12:18:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:19:13.768 12:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:13.768 12:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:13.768 12:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:13.768 12:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:13.768 12:18:09 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:13.768 12:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:19:13.768 12:18:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:14.026 [global] 00:19:14.026 thread=1 00:19:14.026 invalidate=1 00:19:14.026 rw=write 00:19:14.026 time_based=1 00:19:14.026 runtime=1 00:19:14.026 ioengine=libaio 00:19:14.026 direct=1 00:19:14.026 bs=4096 00:19:14.026 iodepth=1 00:19:14.026 norandommap=0 00:19:14.026 numjobs=1 00:19:14.026 00:19:14.026 verify_dump=1 00:19:14.026 verify_backlog=512 00:19:14.026 verify_state_save=0 00:19:14.026 do_verify=1 00:19:14.026 verify=crc32c-intel 00:19:14.026 [job0] 00:19:14.026 filename=/dev/nvme0n1 00:19:14.026 Could not set queue depth (nvme0n1) 00:19:14.026 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:14.026 fio-3.35 00:19:14.026 Starting 1 thread 00:19:15.400 00:19:15.400 job0: (groupid=0, jobs=1): err= 0: pid=68961: Mon Nov 25 12:18:10 2024 00:19:15.400 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:19:15.400 slat (nsec): min=7548, max=67620, avg=9778.45, stdev=4421.53 00:19:15.400 clat (usec): min=80, max=491, avg=105.78, stdev=14.76 00:19:15.400 lat (usec): min=89, max=499, avg=115.56, stdev=16.31 00:19:15.400 clat percentiles (usec): 00:19:15.400 | 1.00th=[ 87], 5.00th=[ 90], 10.00th=[ 92], 20.00th=[ 96], 00:19:15.400 | 30.00th=[ 99], 40.00th=[ 103], 50.00th=[ 106], 60.00th=[ 109], 00:19:15.400 | 70.00th=[ 112], 80.00th=[ 114], 90.00th=[ 119], 95.00th=[ 123], 00:19:15.400 | 99.00th=[ 135], 99.50th=[ 141], 99.90th=[ 306], 99.95th=[ 375], 00:19:15.400 | 99.99th=[ 494] 00:19:15.400 write: IOPS=4698, BW=18.4MiB/s (19.2MB/s)(18.4MiB/1001msec); 0 zone resets 00:19:15.400 slat (usec): min=11, max=123, avg=15.84, stdev= 7.12 00:19:15.400 clat (usec): min=60, max=417, avg=81.54, stdev=14.33 00:19:15.400 lat (usec): min=73, max=452, avg=97.37, stdev=18.37 00:19:15.400 clat percentiles (usec): 00:19:15.400 | 1.00th=[ 66], 5.00th=[ 68], 10.00th=[ 70], 20.00th=[ 73], 00:19:15.400 | 30.00th=[ 77], 40.00th=[ 79], 50.00th=[ 81], 60.00th=[ 83], 00:19:15.400 | 70.00th=[ 86], 80.00th=[ 88], 90.00th=[ 93], 95.00th=[ 98], 00:19:15.400 | 99.00th=[ 114], 99.50th=[ 133], 99.90th=[ 322], 99.95th=[ 404], 00:19:15.400 | 99.99th=[ 416] 00:19:15.400 bw ( KiB/s): min=18224, max=18224, per=96.97%, avg=18224.00, stdev= 0.00, samples=1 00:19:15.400 iops : min= 4556, max= 4556, avg=4556.00, stdev= 0.00, samples=1 00:19:15.400 lat (usec) : 100=65.11%, 250=34.78%, 500=0.12% 00:19:15.400 cpu : usr=1.60%, sys=8.70%, ctx=9311, majf=0, minf=5 00:19:15.400 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:15.400 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.400 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:15.400 issued rwts: total=4608,4703,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:15.400 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:15.400 00:19:15.400 Run status group 0 (all jobs): 00:19:15.400 READ: bw=18.0MiB/s (18.9MB/s), 18.0MiB/s-18.0MiB/s (18.9MB/s-18.9MB/s), io=18.0MiB (18.9MB), run=1001-1001msec 00:19:15.400 WRITE: bw=18.4MiB/s (19.2MB/s), 18.4MiB/s-18.4MiB/s (19.2MB/s-19.2MB/s), io=18.4MiB (19.3MB), run=1001-1001msec 00:19:15.400 
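The nvmf_nmic run traced above condenses to the following target/host sequence; a minimal sketch assuming a running nvmf_tgt, SPDK's scripts/rpc.py on PATH, and nvme-cli, with the addresses, ports, NQNs, and serials taken from the trace (the --hostnqn/--hostid flags shown in the trace are omitted here for brevity):
# target side: one subsystem, one namespace, two listeners (JSON-RPC)
rpc.py nvmf_create_transport -t tcp
rpc.py bdev_malloc_create -b Malloc0 64 512
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
# test case1: the same bdev must be rejected by a second subsystem
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 && exit 1  # expected to fail: Malloc0 already claimed
# test case2: host connects to the same subsystem via both listeners (two paths)
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421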
00:19:15.400 Disk stats (read/write): 00:19:15.400 nvme0n1: ios=4146/4148, merge=0/0, ticks=464/378, in_queue=842, util=90.98% 00:19:15.400 12:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:15.400 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:19:15.400 12:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:15.400 12:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:19:15.400 12:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:15.400 12:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:15.400 12:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:15.400 12:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:15.400 12:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:19:15.400 12:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:19:15.400 12:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:19:15.400 12:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:15.400 12:18:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:19:15.400 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:15.400 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:19:15.400 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:15.400 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:15.400 rmmod nvme_tcp 00:19:15.400 rmmod nvme_fabrics 00:19:15.400 rmmod nvme_keyring 00:19:15.400 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:15.400 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:19:15.400 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:19:15.400 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 68853 ']' 00:19:15.400 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 68853 00:19:15.400 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 68853 ']' 00:19:15.400 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 68853 00:19:15.400 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:19:15.400 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:15.400 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68853 00:19:15.400 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:15.400 killing process with pid 68853 00:19:15.400 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:15.400 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
68853' 00:19:15.400 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 68853 00:19:15.400 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 68853 00:19:15.400 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:15.400 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:15.400 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:15.400 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:19:15.400 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:19:15.400 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:15.400 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:19:15.400 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:15.400 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:15.400 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:15.401 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:15.401 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:15.401 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:15.401 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:15.401 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:15.401 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:15.401 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:15.659 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:15.659 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:15.659 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:15.659 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:15.659 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:15.659 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:15.659 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:15.659 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:15.659 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:15.659 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:19:15.659 00:19:15.659 real 0m5.597s 00:19:15.659 user 0m18.655s 00:19:15.659 sys 0m1.113s 00:19:15.659 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:15.659 12:18:11 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:15.659 ************************************ 00:19:15.659 END TEST nvmf_nmic 00:19:15.659 ************************************ 00:19:15.659 12:18:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:15.659 12:18:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:15.659 12:18:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:15.659 12:18:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:19:15.659 ************************************ 00:19:15.659 START TEST nvmf_fio_target 00:19:15.659 ************************************ 00:19:15.659 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:15.659 * Looking for test storage... 00:19:15.659 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:15.659 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:15.659 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:19:15.659 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:15.919 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:15.919 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:15.919 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:15.919 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:15.919 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:19:15.919 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:19:15.919 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:19:15.919 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:19:15.919 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:19:15.919 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:19:15.919 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:19:15.919 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:15.919 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:19:15.919 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:19:15.919 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:15.919 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:15.919 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:19:15.919 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:19:15.919 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:15.919 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:19:15.919 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:15.919 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:19:15.919 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:19:15.919 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:15.919 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:19:15.919 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:15.919 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:15.919 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:15.919 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:19:15.919 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:15.919 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:15.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.919 --rc genhtml_branch_coverage=1 00:19:15.919 --rc genhtml_function_coverage=1 00:19:15.919 --rc genhtml_legend=1 00:19:15.919 --rc geninfo_all_blocks=1 00:19:15.919 --rc geninfo_unexecuted_blocks=1 00:19:15.919 00:19:15.919 ' 00:19:15.919 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:15.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.919 --rc genhtml_branch_coverage=1 00:19:15.919 --rc genhtml_function_coverage=1 00:19:15.919 --rc genhtml_legend=1 00:19:15.919 --rc geninfo_all_blocks=1 00:19:15.919 --rc geninfo_unexecuted_blocks=1 00:19:15.919 00:19:15.919 ' 00:19:15.919 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:15.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.919 --rc genhtml_branch_coverage=1 00:19:15.919 --rc genhtml_function_coverage=1 00:19:15.919 --rc genhtml_legend=1 00:19:15.919 --rc geninfo_all_blocks=1 00:19:15.919 --rc geninfo_unexecuted_blocks=1 00:19:15.919 00:19:15.919 ' 00:19:15.919 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:15.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:15.919 --rc genhtml_branch_coverage=1 00:19:15.919 --rc genhtml_function_coverage=1 00:19:15.919 --rc genhtml_legend=1 00:19:15.919 --rc geninfo_all_blocks=1 00:19:15.919 --rc geninfo_unexecuted_blocks=1 00:19:15.919 00:19:15.919 ' 00:19:15.919 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:15.919 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:19:15.919 
12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:15.919 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:15.919 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:15.919 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:15.919 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:15.919 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:15.919 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:15.919 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:15.919 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:15.919 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:15.919 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:19:15.919 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:19:15.919 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:15.919 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:15.919 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:15.919 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:15.919 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:15.919 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:15.919 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:15.919 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:15.919 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:15.919 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.919 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.919 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.919 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:15.920 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:15.920 12:18:11 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:15.920 Cannot find device "nvmf_init_br" 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:15.920 Cannot find device "nvmf_init_br2" 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:15.920 Cannot find device "nvmf_tgt_br" 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:15.920 Cannot find device "nvmf_tgt_br2" 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:15.920 Cannot find device "nvmf_init_br" 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:15.920 Cannot find device "nvmf_init_br2" 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:15.920 Cannot find device "nvmf_tgt_br" 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:15.920 Cannot find device "nvmf_tgt_br2" 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:15.920 Cannot find device "nvmf_br" 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:15.920 Cannot find device "nvmf_init_if" 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:15.920 Cannot find device "nvmf_init_if2" 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:15.920 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:19:15.920 
12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:15.920 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:15.920 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:16.179 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:16.179 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:16.179 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:16.179 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:16.179 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:16.179 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:16.179 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:16.179 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:16.179 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:16.179 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:16.179 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:16.179 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:16.179 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:16.179 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:16.179 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:16.179 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:16.179 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:16.179 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:19:16.179 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:16.179 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:16.179 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:16.179 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:16.179 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:16.179 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:16.179 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:16.179 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:16.179 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:16.179 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:16.179 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.102 ms 00:19:16.179 00:19:16.179 --- 10.0.0.3 ping statistics --- 00:19:16.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:16.179 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:19:16.179 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:16.179 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:16.179 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.089 ms 00:19:16.179 00:19:16.179 --- 10.0.0.4 ping statistics --- 00:19:16.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:16.179 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:19:16.179 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:16.179 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:16.179 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:19:16.179 00:19:16.179 --- 10.0.0.1 ping statistics --- 00:19:16.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:16.179 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:19:16.179 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:16.179 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:16.179 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:19:16.179 00:19:16.179 --- 10.0.0.2 ping statistics --- 00:19:16.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:16.180 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:19:16.180 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:16.180 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:19:16.180 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:16.180 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:16.180 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:16.180 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:16.180 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:16.180 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:16.180 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:16.180 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:19:16.180 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:16.180 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:16.180 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.180 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=69188 00:19:16.180 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 69188 00:19:16.180 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 69188 ']' 00:19:16.180 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:16.180 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:16.180 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:16.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:16.180 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:16.180 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:16.180 12:18:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.180 [2024-11-25 12:18:12.016971] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
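The nvmf_veth_init sequence traced above builds the two-namespace test topology the target listens on; condensed into a standalone sketch (run as root; interface names, addresses, and firewall rules are taken from the trace, and the second initiator/target interface pair is left out):
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up    # bridge joins both sides
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3    # initiator -> target-namespace sanity check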
00:19:16.180 [2024-11-25 12:18:12.017028] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:16.437 [2024-11-25 12:18:12.155870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:16.437 [2024-11-25 12:18:12.207202] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:16.437 [2024-11-25 12:18:12.207461] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:16.437 [2024-11-25 12:18:12.207585] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:16.437 [2024-11-25 12:18:12.207673] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:16.437 [2024-11-25 12:18:12.207703] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:16.437 [2024-11-25 12:18:12.208731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:16.437 [2024-11-25 12:18:12.208879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:16.437 [2024-11-25 12:18:12.208884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:16.437 [2024-11-25 12:18:12.208805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:17.002 12:18:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:17.002 12:18:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:19:17.002 12:18:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:17.002 12:18:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:17.002 12:18:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.259 12:18:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:17.259 12:18:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:17.259 [2024-11-25 12:18:13.039147] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:17.259 12:18:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:17.517 12:18:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:19:17.517 12:18:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:17.775 12:18:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:19:17.775 12:18:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:18.033 12:18:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:19:18.033 12:18:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:18.293 12:18:13 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:19:18.293 12:18:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:19:18.622 12:18:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:18.622 12:18:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:19:18.622 12:18:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:18.880 12:18:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:19:18.881 12:18:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:19.139 12:18:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:19:19.139 12:18:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:19:19.397 12:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:19.397 12:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:19.397 12:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:19.655 12:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:19.655 12:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:19.912 12:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:20.171 [2024-11-25 12:18:15.816959] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:20.171 12:18:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:19:20.171 12:18:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:19:20.430 12:18:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid=5934b511-67a7-4c73-84f0-52fdf36bb79d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:19:20.688 12:18:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:19:20.688 12:18:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:19:20.688 12:18:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 
nvme_devices=0 00:19:20.688 12:18:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:19:20.688 12:18:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:19:20.688 12:18:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:19:22.585 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:22.585 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:22.585 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:22.585 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:19:22.585 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:22.585 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:19:22.585 12:18:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:22.585 [global] 00:19:22.585 thread=1 00:19:22.585 invalidate=1 00:19:22.585 rw=write 00:19:22.585 time_based=1 00:19:22.585 runtime=1 00:19:22.585 ioengine=libaio 00:19:22.585 direct=1 00:19:22.585 bs=4096 00:19:22.585 iodepth=1 00:19:22.585 norandommap=0 00:19:22.585 numjobs=1 00:19:22.585 00:19:22.585 verify_dump=1 00:19:22.585 verify_backlog=512 00:19:22.585 verify_state_save=0 00:19:22.585 do_verify=1 00:19:22.585 verify=crc32c-intel 00:19:22.585 [job0] 00:19:22.585 filename=/dev/nvme0n1 00:19:22.585 [job1] 00:19:22.585 filename=/dev/nvme0n2 00:19:22.585 [job2] 00:19:22.585 filename=/dev/nvme0n3 00:19:22.585 [job3] 00:19:22.585 filename=/dev/nvme0n4 00:19:22.843 Could not set queue depth (nvme0n1) 00:19:22.843 Could not set queue depth (nvme0n2) 00:19:22.843 Could not set queue depth (nvme0n3) 00:19:22.843 Could not set queue depth (nvme0n4) 00:19:22.843 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:22.843 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:22.843 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:22.843 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:22.843 fio-3.35 00:19:22.843 Starting 4 threads 00:19:24.217 00:19:24.217 job0: (groupid=0, jobs=1): err= 0: pid=69474: Mon Nov 25 12:18:19 2024 00:19:24.217 read: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec) 00:19:24.217 slat (nsec): min=5743, max=74377, avg=6900.25, stdev=2841.58 00:19:24.217 clat (usec): min=75, max=283, avg=96.57, stdev=11.94 00:19:24.217 lat (usec): min=81, max=301, avg=103.47, stdev=12.48 00:19:24.217 clat percentiles (usec): 00:19:24.217 | 1.00th=[ 81], 5.00th=[ 84], 10.00th=[ 86], 20.00th=[ 88], 00:19:24.217 | 30.00th=[ 90], 40.00th=[ 92], 50.00th=[ 94], 60.00th=[ 97], 00:19:24.217 | 70.00th=[ 100], 80.00th=[ 104], 90.00th=[ 112], 95.00th=[ 119], 00:19:24.217 | 99.00th=[ 135], 99.50th=[ 143], 99.90th=[ 165], 99.95th=[ 265], 00:19:24.217 | 99.99th=[ 285] 00:19:24.217 write: IOPS=5121, BW=20.0MiB/s (21.0MB/s)(20.0MiB/1001msec); 0 zone resets 00:19:24.217 slat (nsec): min=9276, 
max=99344, avg=11621.52, stdev=4898.53 00:19:24.217 clat (usec): min=57, max=367, avg=78.32, stdev=13.75 00:19:24.217 lat (usec): min=67, max=377, avg=89.94, stdev=15.40 00:19:24.217 clat percentiles (usec): 00:19:24.217 | 1.00th=[ 63], 5.00th=[ 66], 10.00th=[ 68], 20.00th=[ 70], 00:19:24.217 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 76], 60.00th=[ 78], 00:19:24.217 | 70.00th=[ 81], 80.00th=[ 85], 90.00th=[ 93], 95.00th=[ 101], 00:19:24.217 | 99.00th=[ 121], 99.50th=[ 129], 99.90th=[ 225], 99.95th=[ 262], 00:19:24.217 | 99.99th=[ 367] 00:19:24.217 bw ( KiB/s): min=20480, max=20480, per=25.02%, avg=20480.00, stdev= 0.00, samples=1 00:19:24.217 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:19:24.217 lat (usec) : 100=82.98%, 250=16.96%, 500=0.06% 00:19:24.217 cpu : usr=1.50%, sys=7.80%, ctx=10249, majf=0, minf=13 00:19:24.217 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:24.217 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.217 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.217 issued rwts: total=5120,5127,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:24.217 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:24.217 job1: (groupid=0, jobs=1): err= 0: pid=69475: Mon Nov 25 12:18:19 2024 00:19:24.217 read: IOPS=5148, BW=20.1MiB/s (21.1MB/s)(20.1MiB/1001msec) 00:19:24.217 slat (nsec): min=5659, max=27013, avg=6425.42, stdev=952.43 00:19:24.217 clat (usec): min=71, max=3602, avg=94.06, stdev=95.08 00:19:24.217 lat (usec): min=79, max=3609, avg=100.48, stdev=95.38 00:19:24.217 clat percentiles (usec): 00:19:24.217 | 1.00th=[ 78], 5.00th=[ 80], 10.00th=[ 82], 20.00th=[ 84], 00:19:24.217 | 30.00th=[ 86], 40.00th=[ 87], 50.00th=[ 89], 60.00th=[ 90], 00:19:24.217 | 70.00th=[ 93], 80.00th=[ 96], 90.00th=[ 102], 95.00th=[ 110], 00:19:24.217 | 99.00th=[ 127], 99.50th=[ 139], 99.90th=[ 1418], 99.95th=[ 2868], 00:19:24.217 | 99.99th=[ 3589] 00:19:24.217 write: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec); 0 zone resets 00:19:24.217 slat (nsec): min=9381, max=92907, avg=10562.03, stdev=3555.73 00:19:24.217 clat (usec): min=48, max=150, avg=73.50, stdev= 9.72 00:19:24.217 lat (usec): min=65, max=204, avg=84.06, stdev=10.63 00:19:24.217 clat percentiles (usec): 00:19:24.217 | 1.00th=[ 61], 5.00th=[ 64], 10.00th=[ 65], 20.00th=[ 67], 00:19:24.217 | 30.00th=[ 69], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 74], 00:19:24.217 | 70.00th=[ 76], 80.00th=[ 79], 90.00th=[ 86], 95.00th=[ 93], 00:19:24.217 | 99.00th=[ 110], 99.50th=[ 116], 99.90th=[ 131], 99.95th=[ 137], 00:19:24.217 | 99.99th=[ 151] 00:19:24.217 bw ( KiB/s): min=23424, max=23424, per=28.61%, avg=23424.00, stdev= 0.00, samples=1 00:19:24.218 iops : min= 5856, max= 5856, avg=5856.00, stdev= 0.00, samples=1 00:19:24.218 lat (usec) : 50=0.01%, 100=92.36%, 250=7.53%, 500=0.03% 00:19:24.218 lat (msec) : 2=0.03%, 4=0.05% 00:19:24.218 cpu : usr=1.60%, sys=7.40%, ctx=10786, majf=0, minf=9 00:19:24.218 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:24.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.218 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.218 issued rwts: total=5154,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:24.218 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:24.218 job2: (groupid=0, jobs=1): err= 0: pid=69476: Mon Nov 25 12:18:19 2024 00:19:24.218 read: IOPS=4101, BW=16.0MiB/s 
(16.8MB/s)(16.0MiB/1001msec) 00:19:24.218 slat (nsec): min=6152, max=59463, avg=7271.06, stdev=2674.72 00:19:24.218 clat (usec): min=78, max=401, avg=113.59, stdev=15.96 00:19:24.218 lat (usec): min=88, max=408, avg=120.86, stdev=16.12 00:19:24.218 clat percentiles (usec): 00:19:24.218 | 1.00th=[ 87], 5.00th=[ 91], 10.00th=[ 93], 20.00th=[ 97], 00:19:24.218 | 30.00th=[ 105], 40.00th=[ 113], 50.00th=[ 116], 60.00th=[ 119], 00:19:24.218 | 70.00th=[ 122], 80.00th=[ 126], 90.00th=[ 133], 95.00th=[ 137], 00:19:24.218 | 99.00th=[ 149], 99.50th=[ 153], 99.90th=[ 225], 99.95th=[ 239], 00:19:24.218 | 99.99th=[ 404] 00:19:24.218 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 00:19:24.218 slat (nsec): min=7037, max=92017, avg=11763.28, stdev=5485.04 00:19:24.218 clat (usec): min=66, max=1240, avg=95.88, stdev=31.56 00:19:24.218 lat (usec): min=75, max=1251, avg=107.64, stdev=32.27 00:19:24.218 clat percentiles (usec): 00:19:24.218 | 1.00th=[ 71], 5.00th=[ 74], 10.00th=[ 77], 20.00th=[ 81], 00:19:24.218 | 30.00th=[ 87], 40.00th=[ 91], 50.00th=[ 94], 60.00th=[ 97], 00:19:24.218 | 70.00th=[ 100], 80.00th=[ 104], 90.00th=[ 112], 95.00th=[ 118], 00:19:24.218 | 99.00th=[ 165], 99.50th=[ 297], 99.90th=[ 490], 99.95th=[ 545], 00:19:24.218 | 99.99th=[ 1237] 00:19:24.218 bw ( KiB/s): min=16608, max=16608, per=20.29%, avg=16608.00, stdev= 0.00, samples=1 00:19:24.218 iops : min= 4152, max= 4152, avg=4152.00, stdev= 0.00, samples=1 00:19:24.218 lat (usec) : 100=48.75%, 250=50.85%, 500=0.36%, 750=0.03% 00:19:24.218 lat (msec) : 2=0.01% 00:19:24.218 cpu : usr=1.70%, sys=6.40%, ctx=8714, majf=0, minf=14 00:19:24.218 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:24.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.218 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.218 issued rwts: total=4106,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:24.218 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:24.218 job3: (groupid=0, jobs=1): err= 0: pid=69477: Mon Nov 25 12:18:19 2024 00:19:24.218 read: IOPS=4805, BW=18.8MiB/s (19.7MB/s)(18.8MiB/1001msec) 00:19:24.218 slat (nsec): min=4928, max=92648, avg=7191.69, stdev=3693.47 00:19:24.218 clat (usec): min=81, max=407, avg=99.02, stdev=13.50 00:19:24.218 lat (usec): min=86, max=412, avg=106.21, stdev=14.24 00:19:24.218 clat percentiles (usec): 00:19:24.218 | 1.00th=[ 86], 5.00th=[ 88], 10.00th=[ 90], 20.00th=[ 91], 00:19:24.218 | 30.00th=[ 93], 40.00th=[ 95], 50.00th=[ 96], 60.00th=[ 98], 00:19:24.218 | 70.00th=[ 101], 80.00th=[ 105], 90.00th=[ 114], 95.00th=[ 121], 00:19:24.218 | 99.00th=[ 137], 99.50th=[ 143], 99.90th=[ 306], 99.95th=[ 326], 00:19:24.218 | 99.99th=[ 408] 00:19:24.218 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:19:24.218 slat (usec): min=7, max=111, avg=11.76, stdev= 6.45 00:19:24.218 clat (usec): min=59, max=397, avg=82.08, stdev=13.48 00:19:24.218 lat (usec): min=72, max=408, avg=93.84, stdev=16.19 00:19:24.218 clat percentiles (usec): 00:19:24.218 | 1.00th=[ 68], 5.00th=[ 71], 10.00th=[ 73], 20.00th=[ 75], 00:19:24.218 | 30.00th=[ 76], 40.00th=[ 78], 50.00th=[ 80], 60.00th=[ 82], 00:19:24.218 | 70.00th=[ 85], 80.00th=[ 89], 90.00th=[ 95], 95.00th=[ 102], 00:19:24.218 | 99.00th=[ 118], 99.50th=[ 128], 99.90th=[ 235], 99.95th=[ 351], 00:19:24.218 | 99.99th=[ 400] 00:19:24.218 bw ( KiB/s): min=20480, max=20480, per=25.02%, avg=20480.00, stdev= 0.00, samples=1 00:19:24.218 iops : min= 
5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:19:24.218 lat (usec) : 100=81.30%, 250=18.60%, 500=0.10% 00:19:24.218 cpu : usr=2.20%, sys=7.00%, ctx=9930, majf=0, minf=11 00:19:24.218 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:24.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.218 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.218 issued rwts: total=4810,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:24.218 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:24.218 00:19:24.218 Run status group 0 (all jobs): 00:19:24.218 READ: bw=74.9MiB/s (78.5MB/s), 16.0MiB/s-20.1MiB/s (16.8MB/s-21.1MB/s), io=75.0MiB (78.6MB), run=1001-1001msec 00:19:24.218 WRITE: bw=79.9MiB/s (83.8MB/s), 18.0MiB/s-22.0MiB/s (18.9MB/s-23.0MB/s), io=80.0MiB (83.9MB), run=1001-1001msec 00:19:24.218 00:19:24.218 Disk stats (read/write): 00:19:24.218 nvme0n1: ios=4315/4608, merge=0/0, ticks=438/390, in_queue=828, util=87.98% 00:19:24.218 nvme0n2: ios=4657/4692, merge=0/0, ticks=458/362, in_queue=820, util=88.07% 00:19:24.218 nvme0n3: ios=3584/3794, merge=0/0, ticks=421/388, in_queue=809, util=89.39% 00:19:24.218 nvme0n4: ios=4096/4461, merge=0/0, ticks=415/394, in_queue=809, util=89.86% 00:19:24.218 12:18:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:19:24.218 [global] 00:19:24.218 thread=1 00:19:24.218 invalidate=1 00:19:24.218 rw=randwrite 00:19:24.218 time_based=1 00:19:24.218 runtime=1 00:19:24.218 ioengine=libaio 00:19:24.218 direct=1 00:19:24.218 bs=4096 00:19:24.218 iodepth=1 00:19:24.218 norandommap=0 00:19:24.218 numjobs=1 00:19:24.218 00:19:24.218 verify_dump=1 00:19:24.218 verify_backlog=512 00:19:24.218 verify_state_save=0 00:19:24.218 do_verify=1 00:19:24.218 verify=crc32c-intel 00:19:24.218 [job0] 00:19:24.218 filename=/dev/nvme0n1 00:19:24.218 [job1] 00:19:24.218 filename=/dev/nvme0n2 00:19:24.218 [job2] 00:19:24.218 filename=/dev/nvme0n3 00:19:24.218 [job3] 00:19:24.218 filename=/dev/nvme0n4 00:19:24.218 Could not set queue depth (nvme0n1) 00:19:24.218 Could not set queue depth (nvme0n2) 00:19:24.218 Could not set queue depth (nvme0n3) 00:19:24.218 Could not set queue depth (nvme0n4) 00:19:24.218 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:24.218 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:24.218 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:24.218 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:24.218 fio-3.35 00:19:24.218 Starting 4 threads 00:19:25.586 00:19:25.586 job0: (groupid=0, jobs=1): err= 0: pid=69536: Mon Nov 25 12:18:21 2024 00:19:25.586 read: IOPS=3164, BW=12.4MiB/s (13.0MB/s)(12.4MiB/1001msec) 00:19:25.586 slat (usec): min=3, max=2900, avg= 7.30, stdev=51.50 00:19:25.586 clat (usec): min=25, max=4020, avg=156.47, stdev=77.78 00:19:25.586 lat (usec): min=81, max=4026, avg=163.77, stdev=92.03 00:19:25.586 clat percentiles (usec): 00:19:25.586 | 1.00th=[ 114], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 147], 00:19:25.587 | 30.00th=[ 149], 40.00th=[ 151], 50.00th=[ 153], 60.00th=[ 155], 00:19:25.587 | 70.00th=[ 157], 80.00th=[ 161], 90.00th=[ 169], 95.00th=[ 178], 00:19:25.587 | 99.00th=[ 
212], 99.50th=[ 225], 99.90th=[ 461], 99.95th=[ 1942], 00:19:25.587 | 99.99th=[ 4015] 00:19:25.587 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:19:25.587 slat (nsec): min=5884, max=93401, avg=11236.17, stdev=4857.41 00:19:25.587 clat (usec): min=46, max=567, avg=120.59, stdev=14.35 00:19:25.587 lat (usec): min=65, max=578, avg=131.83, stdev=13.73 00:19:25.587 clat percentiles (usec): 00:19:25.587 | 1.00th=[ 88], 5.00th=[ 108], 10.00th=[ 111], 20.00th=[ 114], 00:19:25.587 | 30.00th=[ 116], 40.00th=[ 118], 50.00th=[ 119], 60.00th=[ 121], 00:19:25.587 | 70.00th=[ 124], 80.00th=[ 128], 90.00th=[ 135], 95.00th=[ 141], 00:19:25.587 | 99.00th=[ 161], 99.50th=[ 178], 99.90th=[ 200], 99.95th=[ 249], 00:19:25.587 | 99.99th=[ 570] 00:19:25.587 bw ( KiB/s): min=14560, max=14560, per=20.92%, avg=14560.00, stdev= 0.00, samples=1 00:19:25.587 iops : min= 3640, max= 3640, avg=3640.00, stdev= 0.00, samples=1 00:19:25.587 lat (usec) : 50=0.03%, 100=1.53%, 250=98.30%, 500=0.09%, 750=0.03% 00:19:25.587 lat (msec) : 2=0.01%, 10=0.01% 00:19:25.587 cpu : usr=0.70%, sys=5.50%, ctx=6763, majf=0, minf=7 00:19:25.587 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:25.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:25.587 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:25.587 issued rwts: total=3168,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:25.587 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:25.587 job1: (groupid=0, jobs=1): err= 0: pid=69537: Mon Nov 25 12:18:21 2024 00:19:25.587 read: IOPS=4683, BW=18.3MiB/s (19.2MB/s)(18.3MiB/1001msec) 00:19:25.587 slat (nsec): min=5983, max=51302, avg=6683.79, stdev=1667.24 00:19:25.587 clat (usec): min=56, max=1377, avg=103.01, stdev=30.14 00:19:25.587 lat (usec): min=80, max=1385, avg=109.70, stdev=30.28 00:19:25.587 clat percentiles (usec): 00:19:25.587 | 1.00th=[ 80], 5.00th=[ 82], 10.00th=[ 84], 20.00th=[ 87], 00:19:25.587 | 30.00th=[ 92], 40.00th=[ 100], 50.00th=[ 103], 60.00th=[ 106], 00:19:25.587 | 70.00th=[ 109], 80.00th=[ 113], 90.00th=[ 118], 95.00th=[ 124], 00:19:25.587 | 99.00th=[ 147], 99.50th=[ 281], 99.90th=[ 334], 99.95th=[ 400], 00:19:25.587 | 99.99th=[ 1385] 00:19:25.587 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:19:25.587 slat (usec): min=9, max=100, avg=11.13, stdev= 4.66 00:19:25.587 clat (usec): min=56, max=416, avg=82.19, stdev=16.19 00:19:25.587 lat (usec): min=66, max=426, avg=93.33, stdev=16.98 00:19:25.587 clat percentiles (usec): 00:19:25.587 | 1.00th=[ 63], 5.00th=[ 65], 10.00th=[ 68], 20.00th=[ 71], 00:19:25.587 | 30.00th=[ 76], 40.00th=[ 80], 50.00th=[ 83], 60.00th=[ 85], 00:19:25.587 | 70.00th=[ 87], 80.00th=[ 90], 90.00th=[ 95], 95.00th=[ 101], 00:19:25.587 | 99.00th=[ 117], 99.50th=[ 137], 99.90th=[ 281], 99.95th=[ 338], 00:19:25.587 | 99.99th=[ 416] 00:19:25.587 bw ( KiB/s): min=20624, max=20624, per=29.63%, avg=20624.00, stdev= 0.00, samples=1 00:19:25.587 iops : min= 5156, max= 5156, avg=5156.00, stdev= 0.00, samples=1 00:19:25.587 lat (usec) : 100=68.53%, 250=31.01%, 500=0.45%, 1000=0.01% 00:19:25.587 lat (msec) : 2=0.01% 00:19:25.587 cpu : usr=1.70%, sys=6.80%, ctx=9808, majf=0, minf=11 00:19:25.587 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:25.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:25.587 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:25.587 
issued rwts: total=4688,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:25.587 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:25.587 job2: (groupid=0, jobs=1): err= 0: pid=69538: Mon Nov 25 12:18:21 2024 00:19:25.587 read: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec) 00:19:25.587 slat (nsec): min=5946, max=67816, avg=7239.33, stdev=3197.09 00:19:25.587 clat (usec): min=77, max=1487, avg=95.80, stdev=22.87 00:19:25.587 lat (usec): min=83, max=1493, avg=103.04, stdev=23.20 00:19:25.587 clat percentiles (usec): 00:19:25.587 | 1.00th=[ 84], 5.00th=[ 86], 10.00th=[ 88], 20.00th=[ 90], 00:19:25.587 | 30.00th=[ 91], 40.00th=[ 93], 50.00th=[ 94], 60.00th=[ 95], 00:19:25.587 | 70.00th=[ 97], 80.00th=[ 100], 90.00th=[ 106], 95.00th=[ 112], 00:19:25.587 | 99.00th=[ 131], 99.50th=[ 135], 99.90th=[ 186], 99.95th=[ 445], 00:19:25.587 | 99.99th=[ 1483] 00:19:25.587 write: IOPS=5126, BW=20.0MiB/s (21.0MB/s)(20.0MiB/1001msec); 0 zone resets 00:19:25.587 slat (nsec): min=9333, max=89195, avg=11510.97, stdev=5518.33 00:19:25.587 clat (usec): min=59, max=452, avg=78.79, stdev=11.43 00:19:25.587 lat (usec): min=72, max=462, avg=90.30, stdev=14.08 00:19:25.587 clat percentiles (usec): 00:19:25.587 | 1.00th=[ 68], 5.00th=[ 70], 10.00th=[ 71], 20.00th=[ 73], 00:19:25.587 | 30.00th=[ 74], 40.00th=[ 76], 50.00th=[ 77], 60.00th=[ 79], 00:19:25.587 | 70.00th=[ 81], 80.00th=[ 84], 90.00th=[ 90], 95.00th=[ 96], 00:19:25.587 | 99.00th=[ 115], 99.50th=[ 127], 99.90th=[ 174], 99.95th=[ 251], 00:19:25.587 | 99.99th=[ 453] 00:19:25.587 bw ( KiB/s): min=20480, max=20480, per=29.42%, avg=20480.00, stdev= 0.00, samples=1 00:19:25.587 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:19:25.587 lat (usec) : 100=88.69%, 250=11.26%, 500=0.04%, 750=0.01% 00:19:25.587 lat (msec) : 2=0.01% 00:19:25.587 cpu : usr=1.40%, sys=8.00%, ctx=10252, majf=0, minf=11 00:19:25.587 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:25.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:25.587 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:25.587 issued rwts: total=5120,5132,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:25.587 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:25.587 job3: (groupid=0, jobs=1): err= 0: pid=69539: Mon Nov 25 12:18:21 2024 00:19:25.587 read: IOPS=3162, BW=12.4MiB/s (13.0MB/s)(12.4MiB/1001msec) 00:19:25.587 slat (nsec): min=3983, max=80170, avg=6654.54, stdev=3373.67 00:19:25.587 clat (usec): min=82, max=6895, avg=158.23, stdev=146.79 00:19:25.587 lat (usec): min=89, max=6901, avg=164.89, stdev=146.83 00:19:25.587 clat percentiles (usec): 00:19:25.587 | 1.00th=[ 129], 5.00th=[ 141], 10.00th=[ 143], 20.00th=[ 145], 00:19:25.587 | 30.00th=[ 149], 40.00th=[ 151], 50.00th=[ 153], 60.00th=[ 155], 00:19:25.587 | 70.00th=[ 157], 80.00th=[ 161], 90.00th=[ 167], 95.00th=[ 178], 00:19:25.587 | 99.00th=[ 210], 99.50th=[ 229], 99.90th=[ 461], 99.95th=[ 4817], 00:19:25.587 | 99.99th=[ 6915] 00:19:25.587 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:19:25.587 slat (nsec): min=5812, max=73580, avg=11402.63, stdev=4995.83 00:19:25.587 clat (usec): min=50, max=525, avg=120.34, stdev=14.25 00:19:25.587 lat (usec): min=81, max=536, avg=131.74, stdev=13.56 00:19:25.587 clat percentiles (usec): 00:19:25.587 | 1.00th=[ 87], 5.00th=[ 106], 10.00th=[ 111], 20.00th=[ 114], 00:19:25.587 | 30.00th=[ 116], 40.00th=[ 118], 50.00th=[ 119], 60.00th=[ 121], 00:19:25.587 | 70.00th=[ 
124], 80.00th=[ 127], 90.00th=[ 133], 95.00th=[ 141], 00:19:25.587 | 99.00th=[ 165], 99.50th=[ 180], 99.90th=[ 235], 99.95th=[ 281], 00:19:25.587 | 99.99th=[ 529] 00:19:25.587 bw ( KiB/s): min=14512, max=14512, per=20.85%, avg=14512.00, stdev= 0.00, samples=1 00:19:25.587 iops : min= 3628, max= 3628, avg=3628.00, stdev= 0.00, samples=1 00:19:25.587 lat (usec) : 100=1.42%, 250=98.41%, 500=0.10%, 750=0.03% 00:19:25.587 lat (msec) : 10=0.03% 00:19:25.587 cpu : usr=0.60%, sys=5.60%, ctx=6770, majf=0, minf=19 00:19:25.587 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:25.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:25.587 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:25.587 issued rwts: total=3166,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:25.587 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:25.587 00:19:25.587 Run status group 0 (all jobs): 00:19:25.587 READ: bw=63.0MiB/s (66.1MB/s), 12.4MiB/s-20.0MiB/s (13.0MB/s-20.9MB/s), io=63.1MiB (66.1MB), run=1001-1001msec 00:19:25.587 WRITE: bw=68.0MiB/s (71.3MB/s), 14.0MiB/s-20.0MiB/s (14.7MB/s-21.0MB/s), io=68.0MiB (71.4MB), run=1001-1001msec 00:19:25.587 00:19:25.587 Disk stats (read/write): 00:19:25.587 nvme0n1: ios=2896/3072, merge=0/0, ticks=453/388, in_queue=841, util=88.98% 00:19:25.587 nvme0n2: ios=4147/4608, merge=0/0, ticks=433/389, in_queue=822, util=90.14% 00:19:25.587 nvme0n3: ios=4451/4608, merge=0/0, ticks=449/372, in_queue=821, util=90.03% 00:19:25.587 nvme0n4: ios=2843/3072, merge=0/0, ticks=455/386, in_queue=841, util=89.90% 00:19:25.587 12:18:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:19:25.587 [global] 00:19:25.587 thread=1 00:19:25.587 invalidate=1 00:19:25.587 rw=write 00:19:25.587 time_based=1 00:19:25.587 runtime=1 00:19:25.587 ioengine=libaio 00:19:25.587 direct=1 00:19:25.587 bs=4096 00:19:25.587 iodepth=128 00:19:25.587 norandommap=0 00:19:25.587 numjobs=1 00:19:25.587 00:19:25.587 verify_dump=1 00:19:25.587 verify_backlog=512 00:19:25.587 verify_state_save=0 00:19:25.587 do_verify=1 00:19:25.587 verify=crc32c-intel 00:19:25.587 [job0] 00:19:25.587 filename=/dev/nvme0n1 00:19:25.587 [job1] 00:19:25.587 filename=/dev/nvme0n2 00:19:25.587 [job2] 00:19:25.587 filename=/dev/nvme0n3 00:19:25.587 [job3] 00:19:25.587 filename=/dev/nvme0n4 00:19:25.587 Could not set queue depth (nvme0n1) 00:19:25.587 Could not set queue depth (nvme0n2) 00:19:25.587 Could not set queue depth (nvme0n3) 00:19:25.587 Could not set queue depth (nvme0n4) 00:19:25.587 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:25.587 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:25.587 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:25.587 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:25.587 fio-3.35 00:19:25.588 Starting 4 threads 00:19:26.519 00:19:26.519 job0: (groupid=0, jobs=1): err= 0: pid=69594: Mon Nov 25 12:18:22 2024 00:19:26.519 read: IOPS=3912, BW=15.3MiB/s (16.0MB/s)(15.3MiB/1002msec) 00:19:26.519 slat (usec): min=4, max=6149, avg=108.01, stdev=548.78 00:19:26.519 clat (usec): min=476, max=30090, avg=13346.73, stdev=3982.23 00:19:26.519 lat (usec): min=3553, 
max=30102, avg=13454.74, stdev=4035.56 00:19:26.519 clat percentiles (usec): 00:19:26.519 | 1.00th=[ 3982], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9634], 00:19:26.519 | 30.00th=[10552], 40.00th=[11338], 50.00th=[12911], 60.00th=[13698], 00:19:26.520 | 70.00th=[15270], 80.00th=[17171], 90.00th=[19006], 95.00th=[20579], 00:19:26.520 | 99.00th=[23987], 99.50th=[24773], 99.90th=[27919], 99.95th=[27919], 00:19:26.520 | 99.99th=[30016] 00:19:26.520 write: IOPS=4087, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1002msec); 0 zone resets 00:19:26.520 slat (usec): min=6, max=9902, avg=136.47, stdev=509.17 00:19:26.520 clat (usec): min=6571, max=37926, avg=18142.44, stdev=8235.93 00:19:26.520 lat (usec): min=6584, max=37941, avg=18278.90, stdev=8286.49 00:19:26.520 clat percentiles (usec): 00:19:26.520 | 1.00th=[ 7570], 5.00th=[ 7832], 10.00th=[ 8029], 20.00th=[ 9765], 00:19:26.520 | 30.00th=[11600], 40.00th=[13304], 50.00th=[16909], 60.00th=[20579], 00:19:26.520 | 70.00th=[23200], 80.00th=[25822], 90.00th=[30016], 95.00th=[33424], 00:19:26.520 | 99.00th=[36439], 99.50th=[36963], 99.90th=[38011], 99.95th=[38011], 00:19:26.520 | 99.99th=[38011] 00:19:26.520 bw ( KiB/s): min=12585, max=20208, per=21.58%, avg=16396.50, stdev=5390.27, samples=2 00:19:26.520 iops : min= 3146, max= 5052, avg=4099.00, stdev=1347.75, samples=2 00:19:26.520 lat (usec) : 500=0.01% 00:19:26.520 lat (msec) : 4=0.51%, 10=21.26%, 20=53.80%, 50=24.41% 00:19:26.520 cpu : usr=1.70%, sys=7.19%, ctx=501, majf=0, minf=21 00:19:26.520 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:26.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.520 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:26.520 issued rwts: total=3920,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:26.520 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:26.520 job1: (groupid=0, jobs=1): err= 0: pid=69595: Mon Nov 25 12:18:22 2024 00:19:26.520 read: IOPS=5063, BW=19.8MiB/s (20.7MB/s)(19.8MiB/1002msec) 00:19:26.520 slat (usec): min=3, max=5750, avg=101.81, stdev=497.86 00:19:26.520 clat (usec): min=481, max=25155, avg=12783.19, stdev=3106.35 00:19:26.520 lat (usec): min=3058, max=25173, avg=12884.99, stdev=3102.64 00:19:26.520 clat percentiles (usec): 00:19:26.520 | 1.00th=[ 6980], 5.00th=[ 8979], 10.00th=[ 9372], 20.00th=[10290], 00:19:26.520 | 30.00th=[11469], 40.00th=[11600], 50.00th=[11731], 60.00th=[12518], 00:19:26.520 | 70.00th=[13960], 80.00th=[15270], 90.00th=[17433], 95.00th=[18220], 00:19:26.520 | 99.00th=[21365], 99.50th=[22676], 99.90th=[25035], 99.95th=[25035], 00:19:26.520 | 99.99th=[25035] 00:19:26.520 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:19:26.520 slat (usec): min=8, max=6174, avg=90.10, stdev=405.00 00:19:26.520 clat (usec): min=6217, max=24939, avg=11957.02, stdev=3542.02 00:19:26.520 lat (usec): min=7779, max=24956, avg=12047.12, stdev=3546.17 00:19:26.520 clat percentiles (usec): 00:19:26.520 | 1.00th=[ 7504], 5.00th=[ 7963], 10.00th=[ 8455], 20.00th=[ 9241], 00:19:26.520 | 30.00th=[ 9372], 40.00th=[ 9896], 50.00th=[10945], 60.00th=[12256], 00:19:26.520 | 70.00th=[12649], 80.00th=[14746], 90.00th=[17171], 95.00th=[19530], 00:19:26.520 | 99.00th=[22414], 99.50th=[22676], 99.90th=[25035], 99.95th=[25035], 00:19:26.520 | 99.99th=[25035] 00:19:26.520 bw ( KiB/s): min=20480, max=20521, per=26.98%, avg=20500.50, stdev=28.99, samples=2 00:19:26.520 iops : min= 5120, max= 5130, avg=5125.00, stdev= 7.07, samples=2 
00:19:26.520 lat (usec) : 500=0.01% 00:19:26.520 lat (msec) : 4=0.26%, 10=28.53%, 20=67.45%, 50=3.76% 00:19:26.520 cpu : usr=2.60%, sys=8.39%, ctx=465, majf=0, minf=7 00:19:26.520 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:19:26.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.520 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:26.520 issued rwts: total=5074,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:26.520 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:26.520 job2: (groupid=0, jobs=1): err= 0: pid=69596: Mon Nov 25 12:18:22 2024 00:19:26.520 read: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec) 00:19:26.520 slat (nsec): min=1997, max=7822.9k, avg=113594.03, stdev=581770.75 00:19:26.520 clat (usec): min=7934, max=21486, avg=14113.87, stdev=1964.56 00:19:26.520 lat (usec): min=7943, max=21501, avg=14227.47, stdev=2015.43 00:19:26.520 clat percentiles (usec): 00:19:26.520 | 1.00th=[ 9634], 5.00th=[10552], 10.00th=[11731], 20.00th=[12518], 00:19:26.520 | 30.00th=[13304], 40.00th=[13566], 50.00th=[14091], 60.00th=[14484], 00:19:26.520 | 70.00th=[14877], 80.00th=[15926], 90.00th=[16712], 95.00th=[17171], 00:19:26.520 | 99.00th=[18220], 99.50th=[19530], 99.90th=[20579], 99.95th=[21103], 00:19:26.520 | 99.99th=[21365] 00:19:26.520 write: IOPS=5087, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1005msec); 0 zone resets 00:19:26.520 slat (usec): min=2, max=5446, avg=89.35, stdev=392.89 00:19:26.520 clat (usec): min=4216, max=18682, avg=12162.22, stdev=2291.16 00:19:26.520 lat (usec): min=4220, max=18689, avg=12251.57, stdev=2310.95 00:19:26.520 clat percentiles (usec): 00:19:26.520 | 1.00th=[ 5342], 5.00th=[ 7111], 10.00th=[ 9503], 20.00th=[10683], 00:19:26.520 | 30.00th=[11207], 40.00th=[11731], 50.00th=[12387], 60.00th=[12911], 00:19:26.520 | 70.00th=[13435], 80.00th=[13829], 90.00th=[14877], 95.00th=[15795], 00:19:26.520 | 99.00th=[16450], 99.50th=[16909], 99.90th=[17433], 99.95th=[17957], 00:19:26.520 | 99.99th=[18744] 00:19:26.520 bw ( KiB/s): min=19408, max=20480, per=26.25%, avg=19944.00, stdev=758.02, samples=2 00:19:26.520 iops : min= 4852, max= 5120, avg=4986.00, stdev=189.50, samples=2 00:19:26.520 lat (msec) : 10=7.79%, 20=92.05%, 50=0.16% 00:19:26.520 cpu : usr=2.89%, sys=6.47%, ctx=1530, majf=0, minf=13 00:19:26.520 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:19:26.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.520 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:26.520 issued rwts: total=4608,5113,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:26.520 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:26.520 job3: (groupid=0, jobs=1): err= 0: pid=69597: Mon Nov 25 12:18:22 2024 00:19:26.520 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:19:26.520 slat (nsec): min=1550, max=7928.6k, avg=111438.27, stdev=547998.12 00:19:26.520 clat (usec): min=9229, max=22204, avg=14188.01, stdev=1886.79 00:19:26.520 lat (usec): min=9247, max=25758, avg=14299.45, stdev=1930.85 00:19:26.520 clat percentiles (usec): 00:19:26.520 | 1.00th=[10421], 5.00th=[11469], 10.00th=[12125], 20.00th=[12649], 00:19:26.520 | 30.00th=[13042], 40.00th=[13435], 50.00th=[13960], 60.00th=[14353], 00:19:26.520 | 70.00th=[14877], 80.00th=[15795], 90.00th=[16712], 95.00th=[17433], 00:19:26.520 | 99.00th=[19792], 99.50th=[20579], 99.90th=[21365], 99.95th=[21365], 00:19:26.520 | 99.99th=[22152] 
00:19:26.520 write: IOPS=4739, BW=18.5MiB/s (19.4MB/s)(18.6MiB/1004msec); 0 zone resets 00:19:26.520 slat (usec): min=2, max=6103, avg=98.96, stdev=435.38 00:19:26.520 clat (usec): min=701, max=21740, avg=12796.57, stdev=2068.93 00:19:26.520 lat (usec): min=4580, max=21829, avg=12895.53, stdev=2090.17 00:19:26.520 clat percentiles (usec): 00:19:26.520 | 1.00th=[ 5145], 5.00th=[ 9634], 10.00th=[10421], 20.00th=[11338], 00:19:26.520 | 30.00th=[12125], 40.00th=[12518], 50.00th=[12911], 60.00th=[13304], 00:19:26.520 | 70.00th=[13566], 80.00th=[13960], 90.00th=[14877], 95.00th=[16319], 00:19:26.520 | 99.00th=[18744], 99.50th=[19268], 99.90th=[19530], 99.95th=[19792], 00:19:26.520 | 99.99th=[21627] 00:19:26.520 bw ( KiB/s): min=17096, max=20152, per=24.52%, avg=18624.00, stdev=2160.92, samples=2 00:19:26.520 iops : min= 4274, max= 5038, avg=4656.00, stdev=540.23, samples=2 00:19:26.520 lat (usec) : 750=0.01% 00:19:26.520 lat (msec) : 10=3.75%, 20=95.76%, 50=0.48% 00:19:26.520 cpu : usr=2.69%, sys=6.28%, ctx=1479, majf=0, minf=9 00:19:26.520 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:19:26.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.520 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:26.520 issued rwts: total=4608,4758,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:26.520 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:26.520 00:19:26.520 Run status group 0 (all jobs): 00:19:26.520 READ: bw=70.8MiB/s (74.2MB/s), 15.3MiB/s-19.8MiB/s (16.0MB/s-20.7MB/s), io=71.1MiB (74.6MB), run=1002-1005msec 00:19:26.520 WRITE: bw=74.2MiB/s (77.8MB/s), 16.0MiB/s-20.0MiB/s (16.7MB/s-20.9MB/s), io=74.6MiB (78.2MB), run=1002-1005msec 00:19:26.520 00:19:26.520 Disk stats (read/write): 00:19:26.520 nvme0n1: ios=3415/3584, merge=0/0, ticks=15082/19956, in_queue=35038, util=89.28% 00:19:26.520 nvme0n2: ios=4145/4607, merge=0/0, ticks=13703/12829, in_queue=26532, util=91.05% 00:19:26.520 nvme0n3: ios=4144/4439, merge=0/0, ticks=28274/24449, in_queue=52723, util=90.67% 00:19:26.520 nvme0n4: ios=4124/4097, merge=0/0, ticks=27460/24586, in_queue=52046, util=90.95% 00:19:26.520 12:18:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:19:26.777 [global] 00:19:26.777 thread=1 00:19:26.777 invalidate=1 00:19:26.777 rw=randwrite 00:19:26.777 time_based=1 00:19:26.777 runtime=1 00:19:26.777 ioengine=libaio 00:19:26.777 direct=1 00:19:26.777 bs=4096 00:19:26.777 iodepth=128 00:19:26.777 norandommap=0 00:19:26.777 numjobs=1 00:19:26.778 00:19:26.778 verify_dump=1 00:19:26.778 verify_backlog=512 00:19:26.778 verify_state_save=0 00:19:26.778 do_verify=1 00:19:26.778 verify=crc32c-intel 00:19:26.778 [job0] 00:19:26.778 filename=/dev/nvme0n1 00:19:26.778 [job1] 00:19:26.778 filename=/dev/nvme0n2 00:19:26.778 [job2] 00:19:26.778 filename=/dev/nvme0n3 00:19:26.778 [job3] 00:19:26.778 filename=/dev/nvme0n4 00:19:26.778 Could not set queue depth (nvme0n1) 00:19:26.778 Could not set queue depth (nvme0n2) 00:19:26.778 Could not set queue depth (nvme0n3) 00:19:26.778 Could not set queue depth (nvme0n4) 00:19:26.778 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:26.778 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:26.778 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:26.778 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:26.778 fio-3.35 00:19:26.778 Starting 4 threads 00:19:28.229 00:19:28.229 job0: (groupid=0, jobs=1): err= 0: pid=69658: Mon Nov 25 12:18:23 2024 00:19:28.229 read: IOPS=7153, BW=27.9MiB/s (29.3MB/s)(28.0MiB/1002msec) 00:19:28.229 slat (usec): min=2, max=2162, avg=68.51, stdev=260.45 00:19:28.229 clat (usec): min=6218, max=11030, avg=8789.60, stdev=662.10 00:19:28.229 lat (usec): min=6773, max=11371, avg=8858.11, stdev=646.36 00:19:28.229 clat percentiles (usec): 00:19:28.229 | 1.00th=[ 7177], 5.00th=[ 7570], 10.00th=[ 7898], 20.00th=[ 8291], 00:19:28.229 | 30.00th=[ 8586], 40.00th=[ 8717], 50.00th=[ 8848], 60.00th=[ 8979], 00:19:28.229 | 70.00th=[ 9110], 80.00th=[ 9241], 90.00th=[ 9634], 95.00th=[ 9896], 00:19:28.229 | 99.00th=[10159], 99.50th=[10421], 99.90th=[10945], 99.95th=[10945], 00:19:28.229 | 99.99th=[11076] 00:19:28.229 write: IOPS=7457, BW=29.1MiB/s (30.5MB/s)(29.2MiB/1002msec); 0 zone resets 00:19:28.229 slat (usec): min=5, max=2290, avg=64.02, stdev=225.62 00:19:28.229 clat (usec): min=1663, max=10861, avg=8532.20, stdev=724.04 00:19:28.229 lat (usec): min=2047, max=10873, avg=8596.21, stdev=698.73 00:19:28.229 clat percentiles (usec): 00:19:28.229 | 1.00th=[ 6456], 5.00th=[ 7373], 10.00th=[ 8029], 20.00th=[ 8291], 00:19:28.229 | 30.00th=[ 8356], 40.00th=[ 8455], 50.00th=[ 8586], 60.00th=[ 8586], 00:19:28.229 | 70.00th=[ 8717], 80.00th=[ 8848], 90.00th=[ 9241], 95.00th=[ 9634], 00:19:28.229 | 99.00th=[10159], 99.50th=[10159], 99.90th=[10421], 99.95th=[10552], 00:19:28.229 | 99.99th=[10814] 00:19:28.229 bw ( KiB/s): min=28992, max=29768, per=26.30%, avg=29380.00, stdev=548.71, samples=2 00:19:28.229 iops : min= 7248, max= 7442, avg=7345.00, stdev=137.18, samples=2 00:19:28.229 lat (msec) : 2=0.01%, 4=0.22%, 10=97.10%, 20=2.68% 00:19:28.229 cpu : usr=2.50%, sys=11.99%, ctx=1087, majf=0, minf=11 00:19:28.229 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:19:28.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:28.229 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:28.229 issued rwts: total=7168,7472,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:28.229 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:28.229 job1: (groupid=0, jobs=1): err= 0: pid=69659: Mon Nov 25 12:18:23 2024 00:19:28.229 read: IOPS=7657, BW=29.9MiB/s (31.4MB/s)(30.0MiB/1003msec) 00:19:28.229 slat (usec): min=2, max=5105, avg=64.02, stdev=269.28 00:19:28.229 clat (usec): min=6284, max=13141, avg=8334.79, stdev=716.69 00:19:28.229 lat (usec): min=6291, max=13146, avg=8398.81, stdev=743.26 00:19:28.229 clat percentiles (usec): 00:19:28.229 | 1.00th=[ 6587], 5.00th=[ 7308], 10.00th=[ 7635], 20.00th=[ 7898], 00:19:28.229 | 30.00th=[ 8029], 40.00th=[ 8160], 50.00th=[ 8225], 60.00th=[ 8356], 00:19:28.229 | 70.00th=[ 8455], 80.00th=[ 8717], 90.00th=[ 9241], 95.00th=[ 9634], 00:19:28.229 | 99.00th=[10552], 99.50th=[10683], 99.90th=[13042], 99.95th=[13042], 00:19:28.229 | 99.99th=[13173] 00:19:28.229 write: IOPS=7762, BW=30.3MiB/s (31.8MB/s)(30.4MiB/1003msec); 0 zone resets 00:19:28.229 slat (usec): min=5, max=6750, avg=61.33, stdev=281.22 00:19:28.229 clat (usec): min=1920, max=14462, avg=8074.59, stdev=906.25 00:19:28.229 lat (usec): min=2238, max=14482, avg=8135.92, stdev=932.02 00:19:28.229 clat percentiles (usec): 00:19:28.229 | 
1.00th=[ 5932], 5.00th=[ 7111], 10.00th=[ 7504], 20.00th=[ 7701], 00:19:28.229 | 30.00th=[ 7832], 40.00th=[ 7898], 50.00th=[ 7963], 60.00th=[ 8094], 00:19:28.229 | 70.00th=[ 8225], 80.00th=[ 8356], 90.00th=[ 8717], 95.00th=[ 9503], 00:19:28.229 | 99.00th=[12256], 99.50th=[12649], 99.90th=[12911], 99.95th=[12911], 00:19:28.229 | 99.99th=[14484] 00:19:28.229 bw ( KiB/s): min=28680, max=32825, per=27.53%, avg=30752.50, stdev=2930.96, samples=2 00:19:28.229 iops : min= 7170, max= 8206, avg=7688.00, stdev=732.56, samples=2 00:19:28.229 lat (msec) : 2=0.01%, 4=0.21%, 10=96.83%, 20=2.96% 00:19:28.229 cpu : usr=2.99%, sys=11.28%, ctx=765, majf=0, minf=12 00:19:28.229 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:19:28.229 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:28.229 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:28.229 issued rwts: total=7680,7786,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:28.229 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:28.229 job2: (groupid=0, jobs=1): err= 0: pid=69660: Mon Nov 25 12:18:23 2024 00:19:28.229 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec) 00:19:28.229 slat (usec): min=3, max=3060, avg=93.97, stdev=356.74 00:19:28.229 clat (usec): min=9133, max=15919, avg=12121.94, stdev=915.53 00:19:28.229 lat (usec): min=9315, max=16035, avg=12215.91, stdev=882.27 00:19:28.229 clat percentiles (usec): 00:19:28.229 | 1.00th=[ 9896], 5.00th=[10683], 10.00th=[10814], 20.00th=[11731], 00:19:28.229 | 30.00th=[11863], 40.00th=[11994], 50.00th=[12125], 60.00th=[12125], 00:19:28.229 | 70.00th=[12256], 80.00th=[12518], 90.00th=[13435], 95.00th=[14091], 00:19:28.229 | 99.00th=[14484], 99.50th=[14484], 99.90th=[15664], 99.95th=[15926], 00:19:28.229 | 99.99th=[15926] 00:19:28.229 write: IOPS=5575, BW=21.8MiB/s (22.8MB/s)(21.8MiB/1002msec); 0 zone resets 00:19:28.229 slat (usec): min=6, max=4333, avg=88.80, stdev=385.23 00:19:28.229 clat (usec): min=238, max=14043, avg=11558.99, stdev=1117.46 00:19:28.229 lat (usec): min=2579, max=14332, avg=11647.79, stdev=1057.71 00:19:28.229 clat percentiles (usec): 00:19:28.229 | 1.00th=[ 5800], 5.00th=[10290], 10.00th=[11076], 20.00th=[11338], 00:19:28.229 | 30.00th=[11469], 40.00th=[11600], 50.00th=[11600], 60.00th=[11731], 00:19:28.229 | 70.00th=[11863], 80.00th=[11863], 90.00th=[12518], 95.00th=[13173], 00:19:28.229 | 99.00th=[13698], 99.50th=[13829], 99.90th=[13960], 99.95th=[13960], 00:19:28.229 | 99.99th=[14091] 00:19:28.229 bw ( KiB/s): min=21178, max=22536, per=19.57%, avg=21857.00, stdev=960.25, samples=2 00:19:28.229 iops : min= 5294, max= 5634, avg=5464.00, stdev=240.42, samples=2 00:19:28.229 lat (usec) : 250=0.01% 00:19:28.229 lat (msec) : 4=0.30%, 10=2.55%, 20=97.14% 00:19:28.230 cpu : usr=2.10%, sys=9.29%, ctx=590, majf=0, minf=13 00:19:28.230 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:19:28.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:28.230 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:28.230 issued rwts: total=5120,5587,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:28.230 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:28.230 job3: (groupid=0, jobs=1): err= 0: pid=69661: Mon Nov 25 12:18:23 2024 00:19:28.230 read: IOPS=6649, BW=26.0MiB/s (27.2MB/s)(26.0MiB/1001msec) 00:19:28.230 slat (usec): min=3, max=2260, avg=71.75, stdev=276.10 00:19:28.230 clat (usec): min=7154, max=11485, 
avg=9354.69, stdev=615.44 00:19:28.230 lat (usec): min=7398, max=11494, avg=9426.44, stdev=608.20 00:19:28.230 clat percentiles (usec): 00:19:28.230 | 1.00th=[ 7767], 5.00th=[ 8291], 10.00th=[ 8455], 20.00th=[ 8979], 00:19:28.230 | 30.00th=[ 9110], 40.00th=[ 9241], 50.00th=[ 9372], 60.00th=[ 9503], 00:19:28.230 | 70.00th=[ 9634], 80.00th=[ 9896], 90.00th=[10159], 95.00th=[10421], 00:19:28.230 | 99.00th=[10814], 99.50th=[10945], 99.90th=[11338], 99.95th=[11469], 00:19:28.230 | 99.99th=[11469] 00:19:28.230 write: IOPS=7156, BW=28.0MiB/s (29.3MB/s)(28.0MiB/1001msec); 0 zone resets 00:19:28.230 slat (usec): min=6, max=2395, avg=68.88, stdev=291.86 00:19:28.230 clat (usec): min=236, max=11218, avg=8985.42, stdev=867.16 00:19:28.230 lat (usec): min=2008, max=11229, avg=9054.30, stdev=832.21 00:19:28.230 clat percentiles (usec): 00:19:28.230 | 1.00th=[ 6587], 5.00th=[ 7504], 10.00th=[ 8029], 20.00th=[ 8848], 00:19:28.230 | 30.00th=[ 8848], 40.00th=[ 8979], 50.00th=[ 9110], 60.00th=[ 9110], 00:19:28.230 | 70.00th=[ 9241], 80.00th=[ 9503], 90.00th=[ 9896], 95.00th=[10028], 00:19:28.230 | 99.00th=[10683], 99.50th=[10814], 99.90th=[11076], 99.95th=[11076], 00:19:28.230 | 99.99th=[11207] 00:19:28.230 bw ( KiB/s): min=27616, max=28729, per=25.22%, avg=28172.50, stdev=787.01, samples=2 00:19:28.230 iops : min= 6904, max= 7182, avg=7043.00, stdev=196.58, samples=2 00:19:28.230 lat (usec) : 250=0.01% 00:19:28.230 lat (msec) : 4=0.23%, 10=89.58%, 20=10.18% 00:19:28.230 cpu : usr=2.60%, sys=11.60%, ctx=759, majf=0, minf=13 00:19:28.230 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:19:28.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:28.230 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:28.230 issued rwts: total=6656,7164,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:28.230 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:28.230 00:19:28.230 Run status group 0 (all jobs): 00:19:28.230 READ: bw=104MiB/s (109MB/s), 20.0MiB/s-29.9MiB/s (20.9MB/s-31.4MB/s), io=104MiB (109MB), run=1001-1003msec 00:19:28.230 WRITE: bw=109MiB/s (114MB/s), 21.8MiB/s-30.3MiB/s (22.8MB/s-31.8MB/s), io=109MiB (115MB), run=1001-1003msec 00:19:28.230 00:19:28.230 Disk stats (read/write): 00:19:28.230 nvme0n1: ios=6257/6656, merge=0/0, ticks=13246/12929, in_queue=26175, util=89.28% 00:19:28.230 nvme0n2: ios=6705/6987, merge=0/0, ticks=17832/16265, in_queue=34097, util=90.43% 00:19:28.230 nvme0n3: ios=4651/4784, merge=0/0, ticks=13929/12783, in_queue=26712, util=90.96% 00:19:28.230 nvme0n4: ios=5981/6144, merge=0/0, ticks=13460/12512, in_queue=25972, util=90.31% 00:19:28.230 12:18:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:19:28.230 12:18:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=69674 00:19:28.230 12:18:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:19:28.230 12:18:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:19:28.230 [global] 00:19:28.230 thread=1 00:19:28.230 invalidate=1 00:19:28.230 rw=read 00:19:28.230 time_based=1 00:19:28.230 runtime=10 00:19:28.230 ioengine=libaio 00:19:28.230 direct=1 00:19:28.230 bs=4096 00:19:28.230 iodepth=1 00:19:28.230 norandommap=1 00:19:28.230 numjobs=1 00:19:28.230 00:19:28.230 [job0] 00:19:28.230 filename=/dev/nvme0n1 00:19:28.230 [job1] 00:19:28.230 filename=/dev/nvme0n2 
00:19:28.230 [job2] 00:19:28.230 filename=/dev/nvme0n3 00:19:28.230 [job3] 00:19:28.230 filename=/dev/nvme0n4 00:19:28.230 Could not set queue depth (nvme0n1) 00:19:28.230 Could not set queue depth (nvme0n2) 00:19:28.230 Could not set queue depth (nvme0n3) 00:19:28.230 Could not set queue depth (nvme0n4) 00:19:28.230 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:28.230 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:28.230 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:28.230 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:28.230 fio-3.35 00:19:28.230 Starting 4 threads 00:19:31.517 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:19:31.517 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=109764608, buflen=4096 00:19:31.517 fio: pid=69717, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:19:31.517 12:18:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:19:31.517 fio: pid=69716, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:19:31.517 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=115658752, buflen=4096 00:19:31.518 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:31.518 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:31.518 fio: pid=69714, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:19:31.518 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=26329088, buflen=4096 00:19:31.518 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:31.518 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:31.775 fio: pid=69715, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:19:31.775 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=62779392, buflen=4096 00:19:31.775 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:31.775 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:19:31.775 00:19:31.775 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=69714: Mon Nov 25 12:18:27 2024 00:19:31.775 read: IOPS=6873, BW=26.8MiB/s (28.2MB/s)(89.1MiB/3319msec) 00:19:31.775 slat (usec): min=4, max=14199, avg= 8.57, stdev=136.26 00:19:31.776 clat (usec): min=78, max=2568, avg=136.38, stdev=41.37 00:19:31.776 lat (usec): min=85, max=16768, avg=144.95, stdev=152.82 00:19:31.776 clat percentiles (usec): 00:19:31.776 | 1.00th=[ 85], 5.00th=[ 91], 10.00th=[ 99], 20.00th=[ 106], 00:19:31.776 | 30.00th=[ 113], 40.00th=[ 125], 
50.00th=[ 133], 60.00th=[ 147], 00:19:31.776 | 70.00th=[ 157], 80.00th=[ 163], 90.00th=[ 172], 95.00th=[ 182], 00:19:31.776 | 99.00th=[ 217], 99.50th=[ 265], 99.90th=[ 537], 99.95th=[ 611], 00:19:31.776 | 99.99th=[ 1090] 00:19:31.776 bw ( KiB/s): min=26016, max=30512, per=26.57%, avg=28017.33, stdev=1778.84, samples=6 00:19:31.776 iops : min= 6504, max= 7628, avg=7004.33, stdev=444.71, samples=6 00:19:31.776 lat (usec) : 100=11.33%, 250=88.15%, 500=0.38%, 750=0.11%, 1000=0.01% 00:19:31.776 lat (msec) : 2=0.01%, 4=0.01% 00:19:31.776 cpu : usr=0.48%, sys=4.28%, ctx=22823, majf=0, minf=1 00:19:31.776 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:31.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.776 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.776 issued rwts: total=22813,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.776 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:31.776 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=69715: Mon Nov 25 12:18:27 2024 00:19:31.776 read: IOPS=4337, BW=16.9MiB/s (17.8MB/s)(59.9MiB/3534msec) 00:19:31.776 slat (usec): min=4, max=12015, avg=10.95, stdev=206.95 00:19:31.776 clat (usec): min=67, max=4191, avg=219.04, stdev=87.81 00:19:31.776 lat (usec): min=74, max=12125, avg=229.99, stdev=224.06 00:19:31.776 clat percentiles (usec): 00:19:31.776 | 1.00th=[ 89], 5.00th=[ 95], 10.00th=[ 104], 20.00th=[ 157], 00:19:31.776 | 30.00th=[ 172], 40.00th=[ 210], 50.00th=[ 229], 60.00th=[ 255], 00:19:31.776 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 289], 95.00th=[ 297], 00:19:31.776 | 99.00th=[ 359], 99.50th=[ 445], 99.90th=[ 873], 99.95th=[ 988], 00:19:31.776 | 99.99th=[ 3884] 00:19:31.776 bw ( KiB/s): min=14160, max=19424, per=15.01%, avg=15829.33, stdev=2073.54, samples=6 00:19:31.776 iops : min= 3540, max= 4856, avg=3957.33, stdev=518.38, samples=6 00:19:31.776 lat (usec) : 100=8.29%, 250=49.99%, 500=41.37%, 750=0.20%, 1000=0.10% 00:19:31.776 lat (msec) : 2=0.03%, 4=0.01%, 10=0.01% 00:19:31.776 cpu : usr=0.57%, sys=2.63%, ctx=15336, majf=0, minf=2 00:19:31.776 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:31.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.776 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.776 issued rwts: total=15328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.776 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:31.776 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=69716: Mon Nov 25 12:18:27 2024 00:19:31.776 read: IOPS=9053, BW=35.4MiB/s (37.1MB/s)(110MiB/3119msec) 00:19:31.776 slat (usec): min=4, max=9730, avg= 6.98, stdev=70.93 00:19:31.776 clat (usec): min=76, max=7326, avg=102.94, stdev=46.06 00:19:31.776 lat (usec): min=81, max=9844, avg=109.93, stdev=84.84 00:19:31.776 clat percentiles (usec): 00:19:31.776 | 1.00th=[ 87], 5.00th=[ 91], 10.00th=[ 93], 20.00th=[ 95], 00:19:31.776 | 30.00th=[ 97], 40.00th=[ 98], 50.00th=[ 100], 60.00th=[ 102], 00:19:31.776 | 70.00th=[ 104], 80.00th=[ 108], 90.00th=[ 116], 95.00th=[ 127], 00:19:31.776 | 99.00th=[ 149], 99.50th=[ 159], 99.90th=[ 269], 99.95th=[ 355], 00:19:31.776 | 99.99th=[ 725] 00:19:31.776 bw ( KiB/s): min=35600, max=36784, per=34.31%, avg=36181.33, stdev=449.42, samples=6 00:19:31.776 iops : min= 8900, max= 9196, avg=9045.33, 
stdev=112.35, samples=6 00:19:31.776 lat (usec) : 100=52.05%, 250=47.83%, 500=0.10%, 750=0.01% 00:19:31.776 lat (msec) : 2=0.01%, 10=0.01% 00:19:31.776 cpu : usr=0.67%, sys=5.55%, ctx=28241, majf=0, minf=2 00:19:31.776 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:31.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.776 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.776 issued rwts: total=28238,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.776 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:31.776 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=69717: Mon Nov 25 12:18:27 2024 00:19:31.776 read: IOPS=9180, BW=35.9MiB/s (37.6MB/s)(105MiB/2919msec) 00:19:31.776 slat (nsec): min=5811, max=78868, avg=7074.11, stdev=2342.92 00:19:31.776 clat (usec): min=67, max=1424, avg=101.30, stdev=14.08 00:19:31.776 lat (usec): min=87, max=1430, avg=108.37, stdev=14.44 00:19:31.776 clat percentiles (usec): 00:19:31.776 | 1.00th=[ 87], 5.00th=[ 90], 10.00th=[ 92], 20.00th=[ 94], 00:19:31.776 | 30.00th=[ 96], 40.00th=[ 97], 50.00th=[ 99], 60.00th=[ 101], 00:19:31.776 | 70.00th=[ 103], 80.00th=[ 106], 90.00th=[ 114], 95.00th=[ 122], 00:19:31.776 | 99.00th=[ 147], 99.50th=[ 153], 99.90th=[ 184], 99.95th=[ 227], 00:19:31.776 | 99.99th=[ 334] 00:19:31.776 bw ( KiB/s): min=35784, max=37152, per=34.85%, avg=36752.00, stdev=554.17, samples=5 00:19:31.776 iops : min= 8946, max= 9288, avg=9188.00, stdev=138.54, samples=5 00:19:31.776 lat (usec) : 100=57.11%, 250=42.85%, 500=0.03% 00:19:31.776 lat (msec) : 2=0.01% 00:19:31.776 cpu : usr=0.86%, sys=5.89%, ctx=26813, majf=0, minf=1 00:19:31.776 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:31.776 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.776 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:31.776 issued rwts: total=26799,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:31.776 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:31.776 00:19:31.776 Run status group 0 (all jobs): 00:19:31.776 READ: bw=103MiB/s (108MB/s), 16.9MiB/s-35.9MiB/s (17.8MB/s-37.6MB/s), io=364MiB (382MB), run=2919-3534msec 00:19:31.776 00:19:31.776 Disk stats (read/write): 00:19:31.776 nvme0n1: ios=21669/0, merge=0/0, ticks=2969/0, in_queue=2969, util=95.59% 00:19:31.776 nvme0n2: ios=13975/0, merge=0/0, ticks=3226/0, in_queue=3226, util=95.24% 00:19:31.776 nvme0n3: ios=26168/0, merge=0/0, ticks=2720/0, in_queue=2720, util=96.62% 00:19:31.776 nvme0n4: ios=26508/0, merge=0/0, ticks=2738/0, in_queue=2738, util=96.81% 00:19:32.035 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:32.035 12:18:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:19:32.292 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:32.292 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:19:32.550 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:32.550 
12:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:19:32.550 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:32.550 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:19:32.826 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:19:32.826 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 69674 00:19:32.826 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:19:32.826 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:32.826 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:32.826 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:32.826 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:19:32.826 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:32.826 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:32.826 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:32.826 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:32.826 nvmf hotplug test: fio failed as expected 00:19:32.826 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:19:32.826 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:19:32.826 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:19:32.826 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:33.085 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:19:33.085 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:19:33.085 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:19:33.085 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:33.086 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:19:33.086 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:33.086 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:19:33.086 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:33.086 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:19:33.086 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:33.086 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r 
nvme-tcp 00:19:33.086 rmmod nvme_tcp 00:19:33.086 rmmod nvme_fabrics 00:19:33.086 rmmod nvme_keyring 00:19:33.086 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:33.086 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:19:33.086 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:19:33.086 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 69188 ']' 00:19:33.086 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 69188 00:19:33.086 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 69188 ']' 00:19:33.086 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 69188 00:19:33.086 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:19:33.086 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:33.086 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69188 00:19:33.086 killing process with pid 69188 00:19:33.086 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:33.086 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:33.086 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69188' 00:19:33.086 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 69188 00:19:33.086 12:18:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 69188 00:19:33.344 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:33.344 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:33.344 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:33.344 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:19:33.344 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:19:33.344 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:33.344 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:19:33.344 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:33.344 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:33.344 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:33.344 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:33.344 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:33.344 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:33.344 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:33.344 12:18:29 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:33.344 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:33.344 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:33.344 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:33.344 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:33.344 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:33.344 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:33.344 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:33.344 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:33.344 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:33.344 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:33.344 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:33.601 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:19:33.601 00:19:33.601 real 0m17.779s 00:19:33.601 user 1m7.295s 00:19:33.601 sys 0m7.081s 00:19:33.601 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:33.601 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.601 ************************************ 00:19:33.601 END TEST nvmf_fio_target 00:19:33.601 ************************************ 00:19:33.601 12:18:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:33.601 12:18:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:33.601 12:18:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:33.601 12:18:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:19:33.601 ************************************ 00:19:33.601 START TEST nvmf_bdevio 00:19:33.601 ************************************ 00:19:33.601 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:33.601 * Looking for test storage... 
00:19:33.601 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:33.601 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:33.601 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:19:33.601 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:33.601 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:33.601 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:33.601 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:33.601 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:33.601 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:19:33.601 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:19:33.601 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:19:33.601 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:19:33.601 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:19:33.601 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:19:33.601 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:19:33.601 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:33.601 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:19:33.601 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:19:33.601 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:33.601 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:33.601 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:19:33.601 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:19:33.601 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:33.601 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:19:33.601 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:19:33.601 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:19:33.601 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:19:33.601 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:33.601 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:19:33.601 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:19:33.601 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:33.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.602 --rc genhtml_branch_coverage=1 00:19:33.602 --rc genhtml_function_coverage=1 00:19:33.602 --rc genhtml_legend=1 00:19:33.602 --rc geninfo_all_blocks=1 00:19:33.602 --rc geninfo_unexecuted_blocks=1 00:19:33.602 00:19:33.602 ' 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:33.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.602 --rc genhtml_branch_coverage=1 00:19:33.602 --rc genhtml_function_coverage=1 00:19:33.602 --rc genhtml_legend=1 00:19:33.602 --rc geninfo_all_blocks=1 00:19:33.602 --rc geninfo_unexecuted_blocks=1 00:19:33.602 00:19:33.602 ' 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:33.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.602 --rc genhtml_branch_coverage=1 00:19:33.602 --rc genhtml_function_coverage=1 00:19:33.602 --rc genhtml_legend=1 00:19:33.602 --rc geninfo_all_blocks=1 00:19:33.602 --rc geninfo_unexecuted_blocks=1 00:19:33.602 00:19:33.602 ' 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:33.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.602 --rc genhtml_branch_coverage=1 00:19:33.602 --rc genhtml_function_coverage=1 00:19:33.602 --rc genhtml_legend=1 00:19:33.602 --rc geninfo_all_blocks=1 00:19:33.602 --rc geninfo_unexecuted_blocks=1 00:19:33.602 00:19:33.602 ' 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:33.602 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
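The "[: : integer expression expected" diagnostic from nvmf/common.sh line 33 above is a bash pitfall rather than a test failure: the traced command expands to '[' '' -eq 1 ']', and the [ builtin's -eq operator requires an integer on both sides, so an unset or empty variable makes it print that complaint and return nonzero; the script simply skips the branch and carries on. A minimal sketch of the pitfall and a defensive rewrite, using a hypothetical SOME_FLAG variable in place of whatever common.sh actually tests on that line:

    #!/usr/bin/env bash
    unset SOME_FLAG
    [ "$SOME_FLAG" -eq 1 ] && echo yes      # prints "[: : integer expression expected"
    # Defaulting the expansion keeps the operand numeric even when unset:
    [ "${SOME_FLAG:-0}" -eq 1 ] && echo yes # quietly evaluates to false
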
00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:33.602 Cannot find device "nvmf_init_br" 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:33.602 Cannot find device "nvmf_init_br2" 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:33.602 Cannot find device "nvmf_tgt_br" 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:33.602 Cannot find device "nvmf_tgt_br2" 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:33.602 Cannot find device "nvmf_init_br" 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:19:33.602 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:33.861 Cannot find device "nvmf_init_br2" 00:19:33.861 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:19:33.861 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:33.861 Cannot find device "nvmf_tgt_br" 00:19:33.861 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:19:33.861 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:33.861 Cannot find device "nvmf_tgt_br2" 00:19:33.861 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:19:33.861 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:33.861 Cannot find device "nvmf_br" 00:19:33.861 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:19:33.861 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:33.861 Cannot find device "nvmf_init_if" 00:19:33.861 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:19:33.861 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:33.861 Cannot find device "nvmf_init_if2" 00:19:33.861 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:19:33.861 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:33.861 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:33.861 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:19:33.861 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:33.861 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:33.861 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:19:33.861 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:33.861 
12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:33.861 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:33.861 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:33.861 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:33.861 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:33.861 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:33.861 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:33.861 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:33.861 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:33.861 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:33.861 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:33.861 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:33.861 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:33.861 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:33.861 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:33.861 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:33.861 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:33.861 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:33.861 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:33.861 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:33.861 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:33.861 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:33.861 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:33.861 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:33.861 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:33.861 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:33.861 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:33.861 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:33.861 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:33.861 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:33.861 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:33.861 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:33.861 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:33.861 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:19:33.861 00:19:33.861 --- 10.0.0.3 ping statistics --- 00:19:33.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:33.861 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:19:33.861 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:33.861 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:33.861 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.035 ms 00:19:33.861 00:19:33.861 --- 10.0.0.4 ping statistics --- 00:19:33.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:33.861 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:19:33.861 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:33.862 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:33.862 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:19:33.862 00:19:33.862 --- 10.0.0.1 ping statistics --- 00:19:33.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:33.862 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:19:33.862 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:33.862 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:33.862 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:19:33.862 00:19:33.862 --- 10.0.0.2 ping statistics --- 00:19:33.862 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:33.862 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:19:33.862 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:33.862 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:19:33.862 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:33.862 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:33.862 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:33.862 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:33.862 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:33.862 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:33.862 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:33.862 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:33.862 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:33.862 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:33.862 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:33.862 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=70088 00:19:33.862 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 70088 00:19:33.862 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:19:33.862 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 70088 ']' 00:19:33.862 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:33.862 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:33.862 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:33.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:33.862 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:33.862 12:18:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:34.120 [2024-11-25 12:18:29.756907] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
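The "Starting SPDK ... initialization" and "DPDK EAL parameters" banner lines around this point come from the target that nvmfappstart just launched: the trace above runs "ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78", records nvmfpid=70088, and waitforlisten then blocks until the app answers on /var/tmp/spdk.sock. A minimal sketch of that launch-and-wait pattern; the polling loop is an assumption, since the real waitforlisten helper in autotest_common.sh adds retries, timeouts, and process-liveness checks:

    # Start the target inside the test namespace and remember its pid.
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
    nvmfpid=$!
    # Poll the JSON-RPC socket; spdk_get_version succeeds once the app listens.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.2
    done
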
00:19:34.120 [2024-11-25 12:18:29.756956] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:34.120 [2024-11-25 12:18:29.898845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:34.120 [2024-11-25 12:18:29.946340] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:34.120 [2024-11-25 12:18:29.946379] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:34.120 [2024-11-25 12:18:29.946386] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:34.120 [2024-11-25 12:18:29.946391] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:34.120 [2024-11-25 12:18:29.946395] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:34.120 [2024-11-25 12:18:29.948189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:19:34.120 [2024-11-25 12:18:29.948337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:19:34.120 [2024-11-25 12:18:29.948375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:19:34.120 [2024-11-25 12:18:29.948854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:35.051 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:35.051 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:19:35.051 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:35.051 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:35.051 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:35.051 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:35.051 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:35.051 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.051 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:35.051 [2024-11-25 12:18:30.691814] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:35.051 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.051 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:35.051 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.051 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:35.051 Malloc0 00:19:35.051 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.051 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:35.051 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 
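From here the target is configured entirely over JSON-RPC: rpc_cmd is the test framework's wrapper around scripts/rpc.py, so the transport, malloc bdev, and subsystem calls traced above, together with the add_ns and add_listener calls just below, amount to the sequence sketched here (assuming the default /var/tmp/spdk.sock socket that waitforlisten polled):

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192   # same flags as the trace; -u is in-capsule data size
    $rpc bdev_malloc_create 64 512 -b Malloc0      # 64 MiB RAM-backed bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
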
00:19:35.051 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:35.051 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.051 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:35.051 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.051 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:35.051 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.051 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:35.051 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.051 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:35.051 [2024-11-25 12:18:30.753637] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:35.051 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.051 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:19:35.051 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:35.051 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:19:35.051 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:19:35.051 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:35.051 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:35.051 { 00:19:35.051 "params": { 00:19:35.051 "name": "Nvme$subsystem", 00:19:35.051 "trtype": "$TEST_TRANSPORT", 00:19:35.051 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:35.051 "adrfam": "ipv4", 00:19:35.051 "trsvcid": "$NVMF_PORT", 00:19:35.051 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:35.051 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:35.051 "hdgst": ${hdgst:-false}, 00:19:35.051 "ddgst": ${ddgst:-false} 00:19:35.051 }, 00:19:35.051 "method": "bdev_nvme_attach_controller" 00:19:35.051 } 00:19:35.051 EOF 00:19:35.051 )") 00:19:35.051 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:19:35.051 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:19:35.051 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:19:35.051 12:18:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:35.051 "params": { 00:19:35.051 "name": "Nvme1", 00:19:35.051 "trtype": "tcp", 00:19:35.051 "traddr": "10.0.0.3", 00:19:35.051 "adrfam": "ipv4", 00:19:35.051 "trsvcid": "4420", 00:19:35.051 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:35.051 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:35.051 "hdgst": false, 00:19:35.051 "ddgst": false 00:19:35.051 }, 00:19:35.051 "method": "bdev_nvme_attach_controller" 00:19:35.051 }' 00:19:35.051 [2024-11-25 12:18:30.791409] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
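The "--json /dev/fd/62" in the bdevio invocation above is bash process substitution at work: gen_nvmf_target_json prints the bdev_nvme_attach_controller configuration shown in the trace, and feeding it through <(...) lets bdevio load it without a temporary file. A self-contained sketch of the same trick; the "subsystems" envelope is an assumption about the full output of gen_nvmf_target_json, which the trace only shows in fragment form:

    gen_json() {    # stand-in for the framework's gen_nvmf_target_json
        cat <<'EOF'
    {"subsystems": [{"subsystem": "bdev", "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {"name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.3",
                 "adrfam": "ipv4", "trsvcid": "4420",
                 "subnqn": "nqn.2016-06.io.spdk:cnode1",
                 "hostnqn": "nqn.2016-06.io.spdk:host1",
                 "hdgst": false, "ddgst": false}}]}]}
    EOF
    }
    # <(gen_json) expands to a /dev/fd/NN path -- /dev/fd/62 in the run above.
    ./test/bdev/bdevio/bdevio --json <(gen_json)
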
00:19:35.051 [2024-11-25 12:18:30.791467] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70142 ] 00:19:35.308 [2024-11-25 12:18:30.935591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:35.308 [2024-11-25 12:18:30.973121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:35.308 [2024-11-25 12:18:30.973196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:35.308 [2024-11-25 12:18:30.973217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.308 I/O targets: 00:19:35.308 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:35.308 00:19:35.308 00:19:35.308 CUnit - A unit testing framework for C - Version 2.1-3 00:19:35.308 http://cunit.sourceforge.net/ 00:19:35.308 00:19:35.308 00:19:35.308 Suite: bdevio tests on: Nvme1n1 00:19:35.308 Test: blockdev write read block ...passed 00:19:35.566 Test: blockdev write zeroes read block ...passed 00:19:35.566 Test: blockdev write zeroes read no split ...passed 00:19:35.566 Test: blockdev write zeroes read split ...passed 00:19:35.566 Test: blockdev write zeroes read split partial ...passed 00:19:35.566 Test: blockdev reset ...[2024-11-25 12:18:31.225539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:35.566 [2024-11-25 12:18:31.225634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x154ebc0 (9): Bad file descriptor 00:19:35.566 passed 00:19:35.566 Test: blockdev write read 8 blocks ...[2024-11-25 12:18:31.243577] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:19:35.566 passed 00:19:35.566 Test: blockdev write read size > 128k ...passed 00:19:35.566 Test: blockdev write read invalid size ...passed 00:19:35.566 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:35.566 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:35.566 Test: blockdev write read max offset ...passed 00:19:35.566 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:35.566 Test: blockdev writev readv 8 blocks ...passed 00:19:35.566 Test: blockdev writev readv 30 x 1block ...passed 00:19:35.566 Test: blockdev writev readv block ...passed 00:19:35.566 Test: blockdev writev readv size > 128k ...passed 00:19:35.566 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:35.566 Test: blockdev comparev and writev ...[2024-11-25 12:18:31.411832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:35.566 [2024-11-25 12:18:31.411871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:35.566 [2024-11-25 12:18:31.411885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:35.566 [2024-11-25 12:18:31.411892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:35.566 [2024-11-25 12:18:31.412081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:35.566 [2024-11-25 12:18:31.412090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:35.566 [2024-11-25 12:18:31.412102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:35.566 [2024-11-25 12:18:31.412108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:35.566 [2024-11-25 12:18:31.412284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:35.566 [2024-11-25 12:18:31.412296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:35.566 [2024-11-25 12:18:31.412307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:35.566 [2024-11-25 12:18:31.412313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:35.566 [2024-11-25 12:18:31.412626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:35.566 [2024-11-25 12:18:31.412642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:35.566 [2024-11-25 12:18:31.412654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:35.566 [2024-11-25 12:18:31.412660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:19:35.824 passed 00:19:35.824 Test: blockdev nvme passthru rw ...passed 00:19:35.824 Test: blockdev nvme passthru vendor specific ...passed 00:19:35.824 Test: blockdev nvme admin passthru ...[2024-11-25 12:18:31.495133] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:35.824 [2024-11-25 12:18:31.495162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:35.824 [2024-11-25 12:18:31.495234] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:35.824 [2024-11-25 12:18:31.495242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:35.824 [2024-11-25 12:18:31.495307] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:35.824 [2024-11-25 12:18:31.495314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:35.824 [2024-11-25 12:18:31.495381] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:35.824 [2024-11-25 12:18:31.495388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:35.824 passed 00:19:35.824 Test: blockdev copy ...passed 00:19:35.824 00:19:35.824 Run Summary: Type Total Ran Passed Failed Inactive 00:19:35.824 suites 1 1 n/a 0 0 00:19:35.824 tests 23 23 23 0 0 00:19:35.824 asserts 152 152 152 0 n/a 00:19:35.824 00:19:35.824 Elapsed time = 0.870 seconds 00:19:35.824 12:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:35.824 12:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.824 12:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:35.824 12:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.824 12:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:35.824 12:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:19:35.824 12:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:35.824 12:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:19:35.824 12:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:35.824 12:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:19:35.824 12:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:35.825 12:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:36.083 rmmod nvme_tcp 00:19:36.083 rmmod nvme_fabrics 00:19:36.083 rmmod nvme_keyring 00:19:36.083 12:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:36.083 12:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:19:36.083 12:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:19:36.083 12:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@517 -- # '[' -n 70088 ']' 00:19:36.083 12:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 70088 00:19:36.083 12:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 70088 ']' 00:19:36.083 12:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 70088 00:19:36.083 12:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:19:36.083 12:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:36.083 12:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70088 00:19:36.083 12:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:19:36.083 12:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:19:36.083 killing process with pid 70088 00:19:36.083 12:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70088' 00:19:36.083 12:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 70088 00:19:36.083 12:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 70088 00:19:36.083 12:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:36.083 12:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:36.083 12:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:36.083 12:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:19:36.083 12:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:19:36.083 12:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:19:36.083 12:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:36.083 12:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:36.083 12:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:36.083 12:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:36.083 12:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:36.083 12:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:36.083 12:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:36.083 12:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:36.083 12:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:36.340 12:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:36.340 12:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:36.340 12:18:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:36.340 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:36.340 12:18:32 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:36.340 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:36.340 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:36.340 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:36.340 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:36.340 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:36.340 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:36.340 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:19:36.340 00:19:36.340 real 0m2.842s 00:19:36.340 user 0m9.612s 00:19:36.340 sys 0m0.610s 00:19:36.340 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:36.340 ************************************ 00:19:36.340 END TEST nvmf_bdevio 00:19:36.340 ************************************ 00:19:36.340 12:18:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:36.340 12:18:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:36.340 00:19:36.341 real 3m24.417s 00:19:36.341 user 10m39.539s 00:19:36.341 sys 0m46.305s 00:19:36.341 12:18:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:36.341 12:18:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:19:36.341 ************************************ 00:19:36.341 END TEST nvmf_target_core 00:19:36.341 ************************************ 00:19:36.341 12:18:32 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:19:36.341 12:18:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:36.341 12:18:32 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:36.341 12:18:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:36.341 ************************************ 00:19:36.341 START TEST nvmf_target_extra 00:19:36.341 ************************************ 00:19:36.341 12:18:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:19:36.599 * Looking for test storage... 
00:19:36.599 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:36.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:36.599 --rc genhtml_branch_coverage=1 00:19:36.599 --rc genhtml_function_coverage=1 00:19:36.599 --rc genhtml_legend=1 00:19:36.599 --rc geninfo_all_blocks=1 00:19:36.599 --rc geninfo_unexecuted_blocks=1 00:19:36.599 00:19:36.599 ' 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:36.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:36.599 --rc genhtml_branch_coverage=1 00:19:36.599 --rc genhtml_function_coverage=1 00:19:36.599 --rc genhtml_legend=1 00:19:36.599 --rc geninfo_all_blocks=1 00:19:36.599 --rc geninfo_unexecuted_blocks=1 00:19:36.599 00:19:36.599 ' 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:36.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:36.599 --rc genhtml_branch_coverage=1 00:19:36.599 --rc genhtml_function_coverage=1 00:19:36.599 --rc genhtml_legend=1 00:19:36.599 --rc geninfo_all_blocks=1 00:19:36.599 --rc geninfo_unexecuted_blocks=1 00:19:36.599 00:19:36.599 ' 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:36.599 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:36.599 --rc genhtml_branch_coverage=1 00:19:36.599 --rc genhtml_function_coverage=1 00:19:36.599 --rc genhtml_legend=1 00:19:36.599 --rc geninfo_all_blocks=1 00:19:36.599 --rc geninfo_unexecuted_blocks=1 00:19:36.599 00:19:36.599 ' 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:36.599 12:18:32 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.599 12:18:32 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:36.600 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:36.600 ************************************ 00:19:36.600 START TEST nvmf_example 00:19:36.600 ************************************ 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:19:36.600 * Looking for test storage... 
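The scripts/common.sh trace above (cmp_versions, decimal, the ver1/ver2 arrays) is the harness testing whether the installed lcov predates version 2 so it can pick coverage flags. A minimal standalone sketch of that dotted-version comparison (simplified from the traced code, numeric components only; not the verbatim SPDK implementation):

    lt() { # lt A B -> returns 0 (true) when version A < version B
        local -a ver1 ver2
        local v a b
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components compare as 0
            ((a > b)) && return 1
            ((a < b)) && return 0
        done
        return 1   # equal is not less-than
    }

    lcov_ver=$(lcov --version | awk '{print $NF}')   # e.g. "1.15", as captured above
    if lt "$lcov_ver" 2; then
        export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi

Here 1 < 2 decides on the first component, which is why the trace returns 0 at scripts/common.sh@368 and the --rc lcov_* options get exported.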
00:19:36.600 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:36.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:36.600 --rc genhtml_branch_coverage=1 00:19:36.600 --rc genhtml_function_coverage=1 00:19:36.600 --rc genhtml_legend=1 00:19:36.600 --rc geninfo_all_blocks=1 00:19:36.600 --rc geninfo_unexecuted_blocks=1 00:19:36.600 00:19:36.600 ' 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:36.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:36.600 --rc genhtml_branch_coverage=1 00:19:36.600 --rc genhtml_function_coverage=1 00:19:36.600 --rc genhtml_legend=1 00:19:36.600 --rc geninfo_all_blocks=1 00:19:36.600 --rc geninfo_unexecuted_blocks=1 00:19:36.600 00:19:36.600 ' 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:36.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:36.600 --rc genhtml_branch_coverage=1 00:19:36.600 --rc genhtml_function_coverage=1 00:19:36.600 --rc genhtml_legend=1 00:19:36.600 --rc geninfo_all_blocks=1 00:19:36.600 --rc geninfo_unexecuted_blocks=1 00:19:36.600 00:19:36.600 ' 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:36.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:36.600 --rc genhtml_branch_coverage=1 00:19:36.600 --rc genhtml_function_coverage=1 00:19:36.600 --rc genhtml_legend=1 00:19:36.600 --rc geninfo_all_blocks=1 00:19:36.600 --rc geninfo_unexecuted_blocks=1 00:19:36.600 00:19:36.600 ' 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:19:36.600 12:18:32 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:36.600 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:36.601 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:36.601 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:36.601 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:36.601 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:36.601 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:36.601 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:36.601 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:36.601 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:36.601 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:19:36.601 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:19:36.601 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:36.601 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:36.601 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:36.601 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:36.601 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:36.601 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:19:36.858 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:36.858 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:36.858 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:36.858 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:36.859 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:19:36.859 12:18:32 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:36.859 Cannot find device "nvmf_init_br" 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # true 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:36.859 Cannot find device "nvmf_init_br2" 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # true 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:36.859 Cannot find device "nvmf_tgt_br" 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@164 -- # true 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:36.859 Cannot find device "nvmf_tgt_br2" 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@165 -- # true 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:36.859 Cannot find device "nvmf_init_br" 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@166 -- # true 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:36.859 Cannot find device "nvmf_init_br2" 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@167 -- # true 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:36.859 Cannot find device "nvmf_tgt_br" 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@168 -- # true 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:36.859 Cannot find device "nvmf_tgt_br2" 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # true 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:36.859 Cannot find device "nvmf_br" 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@170 -- # true 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:36.859 Cannot find 
device "nvmf_init_if" 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@171 -- # true 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:36.859 Cannot find device "nvmf_init_if2" 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@172 -- # true 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:36.859 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@173 -- # true 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:36.859 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@174 -- # true 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:36.859 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:36.860 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:36.860 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:36.860 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:36.860 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:36.860 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:36.860 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:36.860 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:36.860 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:36.860 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@203 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:36.860 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:37.117 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:37.117 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:37.117 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:37.117 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:37.117 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:37.117 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:37.117 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:37.117 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:37.117 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:37.117 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:37.117 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:37.117 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:37.117 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:37.117 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:37.117 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:19:37.117 00:19:37.117 --- 10.0.0.3 ping statistics --- 00:19:37.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.117 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:19:37.117 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:37.117 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:37.117 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:19:37.117 00:19:37.117 --- 10.0.0.4 ping statistics --- 00:19:37.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.117 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:19:37.117 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:37.117 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:37.117 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:19:37.117 00:19:37.117 --- 10.0.0.1 ping statistics --- 00:19:37.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.117 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:19:37.117 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:37.117 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:37.117 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.037 ms 00:19:37.117 00:19:37.117 --- 10.0.0.2 ping statistics --- 00:19:37.117 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.117 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:19:37.117 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:37.117 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@461 -- # return 0 00:19:37.117 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:37.117 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:37.117 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:37.117 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:37.117 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:37.117 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:37.117 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:37.118 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:19:37.118 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:19:37.118 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:37.118 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:19:37.118 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:19:37.118 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:19:37.118 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=70429 00:19:37.118 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:37.118 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 70429 00:19:37.118 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:19:37.118 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 70429 ']' 00:19:37.118 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:37.118 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:37.118 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:19:37.118 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:37.118 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:37.118 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:19:38.140 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:38.140 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:19:38.140 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:19:38.140 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:38.140 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:19:38.140 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:38.140 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.140 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:19:38.140 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.140 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:19:38.140 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.140 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:19:38.140 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.140 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:19:38.140 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:38.140 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.140 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:19:38.140 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.140 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:19:38.140 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:38.140 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.140 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:19:38.140 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.140 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:38.140 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.140 12:18:33 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:19:38.140 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.140 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:19:38.140 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:50.336 Initializing NVMe Controllers 00:19:50.336 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:19:50.336 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:50.336 Initialization complete. Launching workers. 00:19:50.336 ======================================================== 00:19:50.336 Latency(us) 00:19:50.336 Device Information : IOPS MiB/s Average min max 00:19:50.336 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17433.52 68.10 3670.71 566.47 21701.40 00:19:50.336 ======================================================== 00:19:50.336 Total : 17433.52 68.10 3670.71 566.47 21701.40 00:19:50.336 00:19:50.336 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:19:50.336 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:19:50.336 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:50.336 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:19:50.336 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:50.336 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:19:50.336 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:50.336 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:50.336 rmmod nvme_tcp 00:19:50.336 rmmod nvme_fabrics 00:19:50.336 rmmod nvme_keyring 00:19:50.336 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:50.336 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:19:50.336 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:19:50.336 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 70429 ']' 00:19:50.336 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 70429 00:19:50.336 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 70429 ']' 00:19:50.336 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 70429 00:19:50.336 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:19:50.336 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:50.336 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70429 00:19:50.336 killing process with pid 70429 00:19:50.336 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # 
process_name=nvmf 00:19:50.336 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:19:50.336 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70429' 00:19:50.336 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 70429 00:19:50.336 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 70429 00:19:50.336 nvmf threads initialize successfully 00:19:50.336 bdev subsystem init successfully 00:19:50.336 created a nvmf target service 00:19:50.336 create targets's poll groups done 00:19:50.336 all subsystems of target started 00:19:50.336 nvmf target is running 00:19:50.336 all subsystems of target stopped 00:19:50.336 destroy targets's poll groups done 00:19:50.336 destroyed the nvmf target service 00:19:50.336 bdev subsystem finish successfully 00:19:50.336 nvmf threads destroy successfully 00:19:50.336 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:50.336 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:50.336 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:50.336 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:19:50.336 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:19:50.336 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:50.336 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:19:50.336 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:50.336 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:50.336 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@300 -- # return 0 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:19:50.337 00:19:50.337 real 0m12.189s 00:19:50.337 user 0m44.098s 00:19:50.337 sys 0m1.406s 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:19:50.337 ************************************ 00:19:50.337 END TEST nvmf_example 00:19:50.337 ************************************ 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:50.337 ************************************ 00:19:50.337 START TEST nvmf_filesystem 00:19:50.337 ************************************ 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:19:50.337 * Looking for test storage... 
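The nvmf_example run that just ended boils down to five target-side RPCs plus one initiator command; rpc_cmd in the harness forwards to scripts/rpc.py, so the equivalent standalone sequence is roughly the following sketch (flags copied from the trace above; the comments are interpretation, not test output):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    $RPC nvmf_create_transport -t tcp -o -u 8192    # TCP transport; -u: in-capsule data size
    $RPC bdev_malloc_create 64 512                  # 64 MiB RAM bdev, 512 B blocks -> "Malloc0"
    $RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001   # -a: allow any host
    $RPC nvmf_subsystem_add_ns "$NQN" Malloc0
    $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420

    # Initiator side, same load the test generated: 10 s of 4 KiB random I/O,
    # queue depth 64, 30% reads, against the listener just created.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:$NQN"

That workload is what produced the 17433.52 IOPS / 68.10 MiB/s summary table above.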
00:19:50.337 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:50.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.337 --rc genhtml_branch_coverage=1 00:19:50.337 --rc genhtml_function_coverage=1 00:19:50.337 --rc genhtml_legend=1 00:19:50.337 --rc geninfo_all_blocks=1 00:19:50.337 --rc geninfo_unexecuted_blocks=1 00:19:50.337 00:19:50.337 ' 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:50.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.337 --rc genhtml_branch_coverage=1 00:19:50.337 --rc genhtml_function_coverage=1 00:19:50.337 --rc genhtml_legend=1 00:19:50.337 --rc geninfo_all_blocks=1 00:19:50.337 --rc geninfo_unexecuted_blocks=1 00:19:50.337 00:19:50.337 ' 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:50.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.337 --rc genhtml_branch_coverage=1 00:19:50.337 --rc genhtml_function_coverage=1 00:19:50.337 --rc genhtml_legend=1 00:19:50.337 --rc geninfo_all_blocks=1 00:19:50.337 --rc geninfo_unexecuted_blocks=1 00:19:50.337 00:19:50.337 ' 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:50.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.337 --rc genhtml_branch_coverage=1 00:19:50.337 --rc genhtml_function_coverage=1 00:19:50.337 --rc genhtml_legend=1 00:19:50.337 --rc geninfo_all_blocks=1 00:19:50.337 --rc geninfo_unexecuted_blocks=1 00:19:50.337 00:19:50.337 ' 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:19:50.337 12:18:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:19:50.337 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 
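The CONFIG_* wall being traced here is test/common/build_config.sh, sourced by autotest_common.sh as shown at @45: it records the ./configure choices as plain y/n shell variables (the same data reappears below as SPDK_CONFIG_* #defines when applications.sh reads include/spdk/config.h). An illustrative sketch of how a script can gate on those flags; the variable values match this job's dump, the echo text is made up:

    source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh

    [[ $CONFIG_UBSAN  == y ]] && echo "UBSan-instrumented build"
    [[ $CONFIG_GOLANG == y ]] && echo "Go bindings built; Go JSON-RPC client tests can run"
    [[ $CONFIG_FUZZER == n ]] && echo "fuzzing disabled; skip fuzz targets"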
00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=y 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=y 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # 
CONFIG_TESTS=y 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:19:50.338 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:19:50.338 #define SPDK_CONFIG_H 00:19:50.338 #define SPDK_CONFIG_AIO_FSDEV 1 00:19:50.338 #define SPDK_CONFIG_APPS 1 00:19:50.338 #define SPDK_CONFIG_ARCH 
native 00:19:50.338 #undef SPDK_CONFIG_ASAN 00:19:50.338 #define SPDK_CONFIG_AVAHI 1 00:19:50.338 #undef SPDK_CONFIG_CET 00:19:50.338 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:19:50.338 #define SPDK_CONFIG_COVERAGE 1 00:19:50.338 #define SPDK_CONFIG_CROSS_PREFIX 00:19:50.338 #undef SPDK_CONFIG_CRYPTO 00:19:50.338 #undef SPDK_CONFIG_CRYPTO_MLX5 00:19:50.338 #undef SPDK_CONFIG_CUSTOMOCF 00:19:50.338 #undef SPDK_CONFIG_DAOS 00:19:50.338 #define SPDK_CONFIG_DAOS_DIR 00:19:50.338 #define SPDK_CONFIG_DEBUG 1 00:19:50.338 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:19:50.338 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:19:50.338 #define SPDK_CONFIG_DPDK_INC_DIR 00:19:50.338 #define SPDK_CONFIG_DPDK_LIB_DIR 00:19:50.338 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:19:50.338 #undef SPDK_CONFIG_DPDK_UADK 00:19:50.338 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:19:50.338 #define SPDK_CONFIG_EXAMPLES 1 00:19:50.338 #undef SPDK_CONFIG_FC 00:19:50.338 #define SPDK_CONFIG_FC_PATH 00:19:50.338 #define SPDK_CONFIG_FIO_PLUGIN 1 00:19:50.338 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:19:50.338 #define SPDK_CONFIG_FSDEV 1 00:19:50.338 #undef SPDK_CONFIG_FUSE 00:19:50.338 #undef SPDK_CONFIG_FUZZER 00:19:50.338 #define SPDK_CONFIG_FUZZER_LIB 00:19:50.338 #define SPDK_CONFIG_GOLANG 1 00:19:50.338 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:19:50.338 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:19:50.338 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:19:50.338 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:19:50.338 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:19:50.338 #undef SPDK_CONFIG_HAVE_LIBBSD 00:19:50.338 #undef SPDK_CONFIG_HAVE_LZ4 00:19:50.338 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:19:50.338 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:19:50.338 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:19:50.338 #define SPDK_CONFIG_IDXD 1 00:19:50.338 #define SPDK_CONFIG_IDXD_KERNEL 1 00:19:50.338 #undef SPDK_CONFIG_IPSEC_MB 00:19:50.338 #define SPDK_CONFIG_IPSEC_MB_DIR 00:19:50.338 #define SPDK_CONFIG_ISAL 1 00:19:50.338 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:19:50.338 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:19:50.338 #define SPDK_CONFIG_LIBDIR 00:19:50.338 #undef SPDK_CONFIG_LTO 00:19:50.338 #define SPDK_CONFIG_MAX_LCORES 128 00:19:50.338 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:19:50.338 #define SPDK_CONFIG_NVME_CUSE 1 00:19:50.338 #undef SPDK_CONFIG_OCF 00:19:50.338 #define SPDK_CONFIG_OCF_PATH 00:19:50.338 #define SPDK_CONFIG_OPENSSL_PATH 00:19:50.338 #undef SPDK_CONFIG_PGO_CAPTURE 00:19:50.338 #define SPDK_CONFIG_PGO_DIR 00:19:50.338 #undef SPDK_CONFIG_PGO_USE 00:19:50.338 #define SPDK_CONFIG_PREFIX /usr/local 00:19:50.338 #undef SPDK_CONFIG_RAID5F 00:19:50.338 #undef SPDK_CONFIG_RBD 00:19:50.338 #define SPDK_CONFIG_RDMA 1 00:19:50.338 #define SPDK_CONFIG_RDMA_PROV verbs 00:19:50.338 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:19:50.338 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:19:50.338 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:19:50.338 #define SPDK_CONFIG_SHARED 1 00:19:50.339 #undef SPDK_CONFIG_SMA 00:19:50.339 #define SPDK_CONFIG_TESTS 1 00:19:50.339 #undef SPDK_CONFIG_TSAN 00:19:50.339 #define SPDK_CONFIG_UBLK 1 00:19:50.339 #define SPDK_CONFIG_UBSAN 1 00:19:50.339 #undef SPDK_CONFIG_UNIT_TESTS 00:19:50.339 #undef SPDK_CONFIG_URING 00:19:50.339 #define SPDK_CONFIG_URING_PATH 00:19:50.339 #undef SPDK_CONFIG_URING_ZNS 00:19:50.339 #define SPDK_CONFIG_USDT 1 00:19:50.339 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:19:50.339 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:19:50.339 
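
[editor's note] The large header dump here is applications.sh verifying that the generated include/spdk/config.h really carries #define SPDK_CONFIG_DEBUG, by reading the whole file into a [[ ... == *pattern* ]] glob match; xtrace backslash-escapes each character of the pattern, which is why it appears as *\#\d\e\f\i\n\e...* below. The same check in isolation, against a hypothetical demo header:

#!/usr/bin/env bash
# Sketch only: detect a #define by glob-matching the whole file, no grep needed.
set -euo pipefail

header=./config.h                              # hypothetical demo header
printf '#define SPDK_CONFIG_DEBUG 1\n' > "$header"

if [[ $(<"$header") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
    echo "debug build detected"
fi
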
#undef SPDK_CONFIG_VFIO_USER 00:19:50.339 #define SPDK_CONFIG_VFIO_USER_DIR 00:19:50.339 #define SPDK_CONFIG_VHOST 1 00:19:50.339 #define SPDK_CONFIG_VIRTIO 1 00:19:50.339 #undef SPDK_CONFIG_VTUNE 00:19:50.339 #define SPDK_CONFIG_VTUNE_DIR 00:19:50.339 #define SPDK_CONFIG_WERROR 1 00:19:50.339 #define SPDK_CONFIG_WPDK_DIR 00:19:50.339 #undef SPDK_CONFIG_XNVME 00:19:50.339 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 0 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export 
SPDK_TEST_NVME_CUSE 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:19:50.339 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:19:50.340 
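
[editor's note] The long run of ': 0' / 'export SPDK_TEST_*' pairs in this stretch is the bash default-assignment idiom: `: "${VAR:=default}"` keeps a value injected earlier (here by autorun-spdk.conf) and only falls back to the default when the caller set nothing, and xtrace renders the no-op `:` with its expanded argument. In isolation, assuming a hypothetical flag:

#!/usr/bin/env bash
# Sketch only: default a test flag, keep any value the caller exported.

: "${SPDK_TEST_NVMF:=0}"   # xtrace prints this as ': 0' (or ': 1' if preset)
export SPDK_TEST_NVMF
echo "SPDK_TEST_NVMF=$SPDK_TEST_NVMF"

Run bare it prints 0; run it as `SPDK_TEST_NVMF=1 ./flags.sh` and the caller's 1 survives.
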
12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 1 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:19:50.340 12:18:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 1 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 1 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # 
SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:19:50.340 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:19:50.341 12:18:44 
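
[editor's note] Worth noting in the block above: the `echo leak:libfuse3.so` line writes a LeakSanitizer suppression pattern into a file, and LSAN_OPTIONS/UBSAN_OPTIONS are exported before any instrumented binary launches, since sanitizer runtimes only read these variables at process start. A standalone sketch (the trace uses a fixed path under /var/tmp; this sketch uses mktemp, and the binary name is hypothetical):

#!/usr/bin/env bash
# Sketch only: build a leak-suppression file and hand it to LeakSanitizer.
set -euo pipefail

supp=$(mktemp /tmp/asan_suppression_file.XXXXXX)
echo 'leak:libfuse3.so' > "$supp"              # ignore leaks from this library

export LSAN_OPTIONS="suppressions=$supp"
export UBSAN_OPTIONS='halt_on_error=1:print_stacktrace=1:abort_on_error=1'
# ./instrumented_app                           # hypothetical sanitized binary
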
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 70702 ]] 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 70702 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.T7z2nG 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.T7z2nG/tests/target /tmp/spdk.T7z2nG 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 
-- # fss["$mount"]=btrfs 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=13979553792 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5589872640 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6255357952 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265389056 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=2486018048 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=20140032 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=13979553792 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5589872640 00:19:50.341 
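
[editor's note] The `read -r source fs size use avail _ mount` loop traced here feeds `df -T` output (header filtered away) into associative arrays keyed by mount point, so set_test_storage can later ask whether the filesystem backing a candidate directory has the requested free space. A compressed, standalone version of the same idea (GNU df assumed):

#!/usr/bin/env bash
# Sketch only: index df -T output by mount point, then test one candidate.
set -euo pipefail

declare -A fss sizes avails
while read -r src fs size used avail _ mount; do
    fss["$mount"]=$fs
    sizes["$mount"]=$size
    avails["$mount"]=$avail                    # 1K blocks, df's default unit
done < <(df -T | grep -v '^Filesystem')

requested=$((2 * 1024 * 1024))                 # 2 GiB expressed in 1K blocks
mount=$(df --output=target /tmp | tail -n 1)   # mount point backing /tmp
if (( avails["$mount"] >= requested )); then
    echo "/tmp ok: ${avails[$mount]}K free on $mount (${fss[$mount]})"
fi
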
12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6265253888 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=139264 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora39-libvirt/output 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 
-- # fss["$mount"]=fuse.sshfs 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=92475019264 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=7227760640 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:19:50.341 * Looking for test storage... 00:19:50.341 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/home 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=13979553792 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:50.342 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:19:50.342 12:18:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:50.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.342 --rc genhtml_branch_coverage=1 00:19:50.342 --rc genhtml_function_coverage=1 00:19:50.342 --rc genhtml_legend=1 00:19:50.342 --rc geninfo_all_blocks=1 00:19:50.342 --rc geninfo_unexecuted_blocks=1 00:19:50.342 00:19:50.342 ' 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:50.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.342 --rc genhtml_branch_coverage=1 00:19:50.342 --rc genhtml_function_coverage=1 00:19:50.342 --rc genhtml_legend=1 00:19:50.342 --rc geninfo_all_blocks=1 00:19:50.342 --rc geninfo_unexecuted_blocks=1 00:19:50.342 00:19:50.342 ' 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:50.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.342 --rc genhtml_branch_coverage=1 00:19:50.342 --rc genhtml_function_coverage=1 00:19:50.342 --rc genhtml_legend=1 00:19:50.342 --rc geninfo_all_blocks=1 00:19:50.342 --rc geninfo_unexecuted_blocks=1 00:19:50.342 00:19:50.342 ' 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:50.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.342 --rc genhtml_branch_coverage=1 00:19:50.342 --rc genhtml_function_coverage=1 00:19:50.342 --rc genhtml_legend=1 00:19:50.342 --rc geninfo_all_blocks=1 00:19:50.342 --rc geninfo_unexecuted_blocks=1 00:19:50.342 00:19:50.342 ' 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- 
# uname -s 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.342 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:50.343 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:50.343 12:18:44 
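The `[: : integer expression expected` message above comes from comparing an empty expansion numerically: the trace shows `'[' '' -eq 1 ']'` at nvmf/common.sh line 33. A two-line reproduction, with a hypothetical variable name (the log shows only the empty value), plus the defensive form that avoids the warning:

```bash
#!/usr/bin/env bash
# Reproduces "[: : integer expression expected": test(1)'s -eq needs integer
# operands, and the traced expansion was empty.
flag=""                                  # hypothetical name
[ "$flag" -eq 1 ] && echo "flag set"     # prints the warning; test fails

# Defensive form: default to a number before comparing.
[ "${flag:-0}" -eq 1 ] && echo "flag set"
```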
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@157 
-- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:50.343 Cannot find device "nvmf_init_br" 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # true 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:50.343 Cannot find device "nvmf_init_br2" 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # true 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:50.343 Cannot find device "nvmf_tgt_br" 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@164 -- # true 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:50.343 Cannot find device "nvmf_tgt_br2" 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@165 -- # true 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:50.343 Cannot find device "nvmf_init_br" 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@166 -- # true 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:50.343 Cannot find device "nvmf_init_br2" 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@167 -- # true 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:50.343 Cannot find device "nvmf_tgt_br" 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@168 -- # true 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:50.343 Cannot find device "nvmf_tgt_br2" 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # true 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:50.343 Cannot find device "nvmf_br" 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@170 -- # true 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:50.343 Cannot find device "nvmf_init_if" 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@171 -- # true 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:50.343 Cannot find device "nvmf_init_if2" 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@172 -- # true 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:50.343 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@173 -- # true 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:50.343 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@174 -- # true 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:50.343 12:18:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:50.343 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:50.343 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:50.343 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:50.343 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:50.343 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:50.343 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:50.343 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:50.343 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:50.343 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:50.343 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:50.343 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:50.343 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:50.343 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:50.343 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:50.343 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:50.343 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:50.343 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:50.343 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:50.343 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:50.343 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:50.343 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:50.343 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:50.343 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:50.343 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:50.343 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:50.343 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:50.343 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:50.343 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:50.343 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:19:50.343 00:19:50.343 --- 10.0.0.3 ping statistics --- 00:19:50.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:50.343 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:19:50.343 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:50.343 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:50.343 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:19:50.343 00:19:50.343 --- 10.0.0.4 ping statistics --- 00:19:50.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:50.343 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:19:50.343 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:50.343 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:50.343 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:19:50.343 00:19:50.343 --- 10.0.0.1 ping statistics --- 00:19:50.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:50.343 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:19:50.343 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:50.343 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:50.343 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:19:50.343 00:19:50.343 --- 10.0.0.2 ping statistics --- 00:19:50.343 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:50.343 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:19:50.343 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:50.343 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@461 -- # return 0 00:19:50.343 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:50.343 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:50.344 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:50.344 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:50.344 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:50.344 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:50.344 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:50.344 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:19:50.344 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:50.344 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:50.344 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:19:50.344 ************************************ 00:19:50.344 START TEST nvmf_filesystem_no_in_capsule 00:19:50.344 ************************************ 00:19:50.344 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:19:50.344 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:19:50.344 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:19:50.344 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:50.344 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:50.344 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:19:50.344 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=70890 00:19:50.344 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 70890 00:19:50.344 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 70890 ']' 00:19:50.344 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:50.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
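Everything from `ip netns add` through the four pings above builds the NET_TYPE=virt sandbox for the TCP transport. A condensed sketch of that topology, covering one initiator veth pair and one target pair (the harness creates two of each); interface names, addresses, and the iptables rule are taken from the trace, with the comment-tagging wrapper omitted:

```bash
#!/usr/bin/env bash
# Condensed sketch of the veth/bridge sandbox assembled above: the *_br ends
# are enslaved to a bridge and the target end lives in a network namespace.
set -e
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# Open the NVMe/TCP port on the initiator-facing interface, then verify.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3
```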
00:19:50.344 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:50.344 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:50.344 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:50.344 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:19:50.344 12:18:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:50.344 [2024-11-25 12:18:45.201796] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:19:50.344 [2024-11-25 12:18:45.201853] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:50.344 [2024-11-25 12:18:45.342961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:50.344 [2024-11-25 12:18:45.379232] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:50.344 [2024-11-25 12:18:45.379263] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:50.344 [2024-11-25 12:18:45.379270] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:50.344 [2024-11-25 12:18:45.379275] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:50.344 [2024-11-25 12:18:45.379279] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:50.344 [2024-11-25 12:18:45.379922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:50.344 [2024-11-25 12:18:45.380335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:50.344 [2024-11-25 12:18:45.380609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:50.344 [2024-11-25 12:18:45.380743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:50.344 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:50.344 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:19:50.344 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:50.344 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:50.344 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:19:50.344 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:50.344 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:19:50.344 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:19:50.344 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.344 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:19:50.344 [2024-11-25 12:18:46.107948] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:50.344 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.344 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:19:50.344 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.344 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:19:50.344 Malloc1 00:19:50.344 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.602 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:50.602 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.602 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:19:50.602 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.603 12:18:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:50.603 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.603 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:19:50.603 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.603 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:50.603 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.603 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:19:50.603 [2024-11-25 12:18:46.223341] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:50.603 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.603 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:19:50.603 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:19:50.603 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:50.603 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:19:50.603 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:19:50.603 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:19:50.603 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.603 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:19:50.603 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.603 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:50.603 { 00:19:50.603 "aliases": [ 00:19:50.603 "7110281e-56bf-4f4b-a340-1fe0d117c0e4" 00:19:50.603 ], 00:19:50.603 "assigned_rate_limits": { 00:19:50.603 "r_mbytes_per_sec": 0, 00:19:50.603 "rw_ios_per_sec": 0, 00:19:50.603 "rw_mbytes_per_sec": 0, 00:19:50.603 "w_mbytes_per_sec": 0 00:19:50.603 }, 00:19:50.603 "block_size": 512, 00:19:50.603 "claim_type": "exclusive_write", 00:19:50.603 "claimed": true, 00:19:50.603 "driver_specific": {}, 00:19:50.603 "memory_domains": [ 00:19:50.603 { 00:19:50.603 "dma_device_id": "system", 00:19:50.603 "dma_device_type": 1 00:19:50.603 }, 00:19:50.603 { 00:19:50.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:50.603 
"dma_device_type": 2 00:19:50.603 } 00:19:50.603 ], 00:19:50.603 "name": "Malloc1", 00:19:50.603 "num_blocks": 1048576, 00:19:50.603 "product_name": "Malloc disk", 00:19:50.603 "supported_io_types": { 00:19:50.603 "abort": true, 00:19:50.603 "compare": false, 00:19:50.603 "compare_and_write": false, 00:19:50.603 "copy": true, 00:19:50.603 "flush": true, 00:19:50.603 "get_zone_info": false, 00:19:50.603 "nvme_admin": false, 00:19:50.603 "nvme_io": false, 00:19:50.603 "nvme_io_md": false, 00:19:50.603 "nvme_iov_md": false, 00:19:50.603 "read": true, 00:19:50.603 "reset": true, 00:19:50.603 "seek_data": false, 00:19:50.603 "seek_hole": false, 00:19:50.603 "unmap": true, 00:19:50.603 "write": true, 00:19:50.603 "write_zeroes": true, 00:19:50.603 "zcopy": true, 00:19:50.603 "zone_append": false, 00:19:50.603 "zone_management": false 00:19:50.603 }, 00:19:50.603 "uuid": "7110281e-56bf-4f4b-a340-1fe0d117c0e4", 00:19:50.603 "zoned": false 00:19:50.603 } 00:19:50.603 ]' 00:19:50.603 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:50.603 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:19:50.603 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:50.603 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:19:50.603 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:19:50.603 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:19:50.603 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:19:50.603 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid=5934b511-67a7-4c73-84f0-52fdf36bb79d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:19:50.603 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:19:50.603 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:19:50.603 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:50.603 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:19:50.603 12:18:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:19:53.133 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:53.133 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:53.133 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:19:53.133 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:53.133 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:53.133 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:19:53.133 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:19:53.133 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:19:53.133 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:19:53.133 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:19:53.133 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:19:53.133 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:53.133 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:19:53.133 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:19:53.133 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:19:53.133 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:19:53.133 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:19:53.133 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:19:53.133 12:18:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:19:54.067 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:19:54.067 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:19:54.067 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:54.067 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:54.067 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:19:54.067 ************************************ 00:19:54.067 START TEST filesystem_ext4 00:19:54.067 ************************************ 00:19:54.067 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 
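The target bring-up traced above reduces to five RPCs plus an initiator-side connect, after which the jq calls recompute the bdev size as block_size x num_blocks (512 x 1048576 = 536870912, matching malloc_size). A sketch assuming rpc.py talks to the default socket of the target started earlier; all NQNs, addresses, and sizes are copied from the log:

```bash
#!/usr/bin/env bash
# Sketch of the provisioning and attach sequence above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
$rpc bdev_malloc_create 512 512 -b Malloc1        # 512 MiB bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# Size check mirroring get_bdev_size: block_size * num_blocks.
bs=$($rpc bdev_get_bdevs -b Malloc1 | jq '.[] .block_size')
nb=$($rpc bdev_get_bdevs -b Malloc1 | jq '.[] .num_blocks')
echo $(( bs * nb ))                               # 536870912

nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 \
  --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d
```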
00:19:54.067 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:19:54.067 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:19:54.067 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:19:54.067 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:19:54.067 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:19:54.067 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:19:54.067 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:19:54.067 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:19:54.067 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:19:54.067 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:19:54.067 mke2fs 1.47.0 (5-Feb-2023) 00:19:54.067 Discarding device blocks: 0/522240 done 00:19:54.067 Creating filesystem with 522240 1k blocks and 130560 inodes 00:19:54.067 Filesystem UUID: 4965db50-0640-4498-be28-b11870eca80c 00:19:54.067 Superblock backups stored on blocks: 00:19:54.067 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:19:54.067 00:19:54.067 Allocating group tables: 0/64 done 00:19:54.067 Writing inode tables: 0/64 done 00:19:54.067 Creating journal (8192 blocks): done 00:19:54.068 Writing superblocks and filesystem accounting information: 0/64 done 00:19:54.068 00:19:54.068 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:19:54.068 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:19:59.332 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:19:59.332 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:19:59.332 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:19:59.332 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:19:59.332 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:19:59.332 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:19:59.332 
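Each filesystem_* subtest follows the same smoke-test shape just traced: partition the exported namespace, mkfs, then a write/sync/delete round trip. A sketch of the ext4 case, with commands lifted from the trace (the btrfs and xfs subtests below differ only in the mkfs invocation):

```bash
#!/usr/bin/env bash
# Sketch of the per-filesystem smoke test (target/filesystem.sh). The device
# name nvme0n1 is the one resolved from the serial via lsblk earlier.
set -e
parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe && sleep 1
mkfs.ext4 -F /dev/nvme0n1p1

mkdir -p /mnt/device
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa
sync
rm /mnt/device/aaa
sync
umount /mnt/device
```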
12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 70890 00:19:59.332 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:19:59.332 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:19:59.332 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:19:59.332 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:19:59.332 00:19:59.332 real 0m5.503s 00:19:59.332 user 0m0.017s 00:19:59.332 sys 0m0.057s 00:19:59.332 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:59.332 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:19:59.332 ************************************ 00:19:59.332 END TEST filesystem_ext4 00:19:59.332 ************************************ 00:19:59.332 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:19:59.332 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:59.332 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:59.332 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:19:59.332 ************************************ 00:19:59.332 START TEST filesystem_btrfs 00:19:59.332 ************************************ 00:19:59.332 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:19:59.332 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:19:59.332 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:19:59.332 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:19:59.333 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:19:59.333 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:19:59.333 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:19:59.333 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:19:59.333 12:18:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:19:59.333 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:19:59.333 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:19:59.591 btrfs-progs v6.8.1 00:19:59.591 See https://btrfs.readthedocs.io for more information. 00:19:59.591 00:19:59.591 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:19:59.591 NOTE: several default settings have changed in version 5.15, please make sure 00:19:59.591 this does not affect your deployments: 00:19:59.591 - DUP for metadata (-m dup) 00:19:59.591 - enabled no-holes (-O no-holes) 00:19:59.591 - enabled free-space-tree (-R free-space-tree) 00:19:59.591 00:19:59.591 Label: (null) 00:19:59.591 UUID: 708fbc9d-4889-4ba1-851d-e7eb87714099 00:19:59.591 Node size: 16384 00:19:59.591 Sector size: 4096 (CPU page size: 4096) 00:19:59.591 Filesystem size: 510.00MiB 00:19:59.591 Block group profiles: 00:19:59.591 Data: single 8.00MiB 00:19:59.591 Metadata: DUP 32.00MiB 00:19:59.591 System: DUP 8.00MiB 00:19:59.591 SSD detected: yes 00:19:59.591 Zoned device: no 00:19:59.591 Features: extref, skinny-metadata, no-holes, free-space-tree 00:19:59.591 Checksum: crc32c 00:19:59.591 Number of devices: 1 00:19:59.591 Devices: 00:19:59.591 ID SIZE PATH 00:19:59.591 1 510.00MiB /dev/nvme0n1p1 00:19:59.591 00:19:59.592 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:19:59.592 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:19:59.592 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:19:59.592 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:19:59.915 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:19:59.915 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:19:59.915 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:19:59.915 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:19:59.915 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 70890 00:19:59.915 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:19:59.915 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:19:59.915 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:19:59.915 
12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:19:59.915 00:19:59.915 real 0m0.474s 00:19:59.915 user 0m0.012s 00:19:59.915 sys 0m0.054s 00:19:59.915 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:59.915 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:19:59.915 ************************************ 00:19:59.915 END TEST filesystem_btrfs 00:19:59.915 ************************************ 00:19:59.915 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:19:59.915 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:59.915 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:59.915 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:19:59.915 ************************************ 00:19:59.915 START TEST filesystem_xfs 00:19:59.915 ************************************ 00:19:59.915 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:19:59.915 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:19:59.915 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:19:59.915 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:19:59.915 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:19:59.915 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:19:59.915 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:19:59.915 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:19:59.915 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:19:59.915 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:19:59.915 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:19:59.915 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:19:59.915 = sectsz=512 attr=2, projid32bit=1 00:19:59.915 = crc=1 finobt=1, sparse=1, rmapbt=0 00:19:59.915 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:19:59.915 data 
= bsize=4096 blocks=130560, imaxpct=25 00:19:59.915 = sunit=0 swidth=0 blks 00:19:59.915 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:19:59.915 log =internal log bsize=4096 blocks=16384, version=2 00:19:59.915 = sectsz=512 sunit=0 blks, lazy-count=1 00:19:59.915 realtime =none extsz=4096 blocks=0, rtextents=0 00:20:00.864 Discarding blocks...Done. 00:20:00.864 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:20:00.864 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:20:02.761 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:20:02.761 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:20:02.761 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:20:02.761 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:20:02.761 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:20:02.761 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:20:02.761 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 70890 00:20:02.761 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:20:02.761 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:20:02.761 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:20:02.761 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:20:02.761 00:20:02.761 real 0m2.909s 00:20:02.761 user 0m0.016s 00:20:02.761 sys 0m0.036s 00:20:02.761 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:02.761 ************************************ 00:20:02.761 END TEST filesystem_xfs 00:20:02.761 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:20:02.761 ************************************ 00:20:02.761 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:20:02.761 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:20:02.761 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:03.019 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:03.019 12:18:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:03.019 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:20:03.019 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:20:03.019 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:03.019 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:20:03.019 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:03.019 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:20:03.019 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:03.019 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.019 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:20:03.019 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.019 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:20:03.019 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 70890 00:20:03.019 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 70890 ']' 00:20:03.019 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 70890 00:20:03.019 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:20:03.019 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:03.019 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70890 00:20:03.019 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:03.019 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:03.019 killing process with pid 70890 00:20:03.019 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70890' 00:20:03.019 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 70890 00:20:03.019 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@978 -- # wait 70890 00:20:03.277 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:20:03.277 00:20:03.277 real 0m13.741s 00:20:03.277 user 0m53.230s 00:20:03.277 sys 0m1.372s 00:20:03.277 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:03.277 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:20:03.277 ************************************ 00:20:03.277 END TEST nvmf_filesystem_no_in_capsule 00:20:03.277 ************************************ 00:20:03.277 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:20:03.277 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:03.277 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:03.277 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:20:03.277 ************************************ 00:20:03.277 START TEST nvmf_filesystem_in_capsule 00:20:03.277 ************************************ 00:20:03.277 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:20:03.277 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:20:03.277 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:20:03.277 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:03.277 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:03.277 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:20:03.277 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=71256 00:20:03.277 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 71256 00:20:03.277 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 71256 ']' 00:20:03.277 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:03.277 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:03.278 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:03.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:03.278 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
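The target start-up just traced boils down to the shell flow below. This is a minimal sketch of what the nvmfappstart/waitforlisten helpers do; the binary path and flags are copied from the trace above, while the polling loop is illustrative rather than the verbatim helper code.

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Poll the RPC socket until the target answers; rpc.py ships in the SPDK repo.
for ((i = 0; i < 100; i++)); do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        rpc_get_methods &> /dev/null && break
    sleep 0.1
done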
00:20:03.278 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:03.278 12:18:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:20:03.278 [2024-11-25 12:18:58.984046] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:20:03.278 [2024-11-25 12:18:58.984111] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:03.278 [2024-11-25 12:18:59.114476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:03.278 [2024-11-25 12:18:59.145851] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:03.278 [2024-11-25 12:18:59.145889] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:03.278 [2024-11-25 12:18:59.145895] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:03.278 [2024-11-25 12:18:59.145899] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:03.278 [2024-11-25 12:18:59.145902] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:03.278 [2024-11-25 12:18:59.146588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:03.278 [2024-11-25 12:18:59.146963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:03.278 [2024-11-25 12:18:59.147057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:03.278 [2024-11-25 12:18:59.147060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:04.208 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:04.208 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:20:04.208 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:04.208 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:04.208 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:20:04.208 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:04.208 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:20:04.208 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:20:04.208 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.208 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:20:04.208 [2024-11-25 12:18:59.877981] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:04.208 12:18:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.208 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:20:04.208 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.208 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:20:04.208 Malloc1 00:20:04.208 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.208 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:04.208 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.208 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:20:04.208 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.209 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:04.209 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.209 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:20:04.209 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.209 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:04.209 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.209 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:20:04.209 [2024-11-25 12:18:59.983769] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:04.209 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.209 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:20:04.209 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:20:04.209 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:04.209 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:20:04.209 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:20:04.209 12:18:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:20:04.209 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.209 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:20:04.209 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.209 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:04.209 { 00:20:04.209 "aliases": [ 00:20:04.209 "a2c68286-f407-454d-8fa4-4fc842175f40" 00:20:04.209 ], 00:20:04.209 "assigned_rate_limits": { 00:20:04.209 "r_mbytes_per_sec": 0, 00:20:04.209 "rw_ios_per_sec": 0, 00:20:04.209 "rw_mbytes_per_sec": 0, 00:20:04.209 "w_mbytes_per_sec": 0 00:20:04.209 }, 00:20:04.209 "block_size": 512, 00:20:04.209 "claim_type": "exclusive_write", 00:20:04.209 "claimed": true, 00:20:04.209 "driver_specific": {}, 00:20:04.209 "memory_domains": [ 00:20:04.209 { 00:20:04.209 "dma_device_id": "system", 00:20:04.209 "dma_device_type": 1 00:20:04.209 }, 00:20:04.209 { 00:20:04.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:04.209 "dma_device_type": 2 00:20:04.209 } 00:20:04.209 ], 00:20:04.209 "name": "Malloc1", 00:20:04.209 "num_blocks": 1048576, 00:20:04.209 "product_name": "Malloc disk", 00:20:04.209 "supported_io_types": { 00:20:04.209 "abort": true, 00:20:04.209 "compare": false, 00:20:04.209 "compare_and_write": false, 00:20:04.209 "copy": true, 00:20:04.209 "flush": true, 00:20:04.209 "get_zone_info": false, 00:20:04.209 "nvme_admin": false, 00:20:04.209 "nvme_io": false, 00:20:04.209 "nvme_io_md": false, 00:20:04.209 "nvme_iov_md": false, 00:20:04.209 "read": true, 00:20:04.209 "reset": true, 00:20:04.209 "seek_data": false, 00:20:04.209 "seek_hole": false, 00:20:04.209 "unmap": true, 00:20:04.209 "write": true, 00:20:04.209 "write_zeroes": true, 00:20:04.209 "zcopy": true, 00:20:04.209 "zone_append": false, 00:20:04.209 "zone_management": false 00:20:04.209 }, 00:20:04.209 "uuid": "a2c68286-f407-454d-8fa4-4fc842175f40", 00:20:04.209 "zoned": false 00:20:04.209 } 00:20:04.209 ]' 00:20:04.209 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:04.209 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:20:04.209 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:04.209 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:20:04.209 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:20:04.209 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:20:04.209 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:20:04.209 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid=5934b511-67a7-4c73-84f0-52fdf36bb79d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:20:04.467 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:20:04.467 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:20:04.467 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:20:04.467 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:20:04.467 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:20:06.366 12:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:20:06.366 12:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:20:06.366 12:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:20:06.625 12:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:20:06.625 12:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:20:06.625 12:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:20:06.625 12:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:20:06.625 12:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:20:06.625 12:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:20:06.625 12:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:20:06.625 12:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:20:06.625 12:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:06.625 12:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:20:06.625 12:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:20:06.625 12:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:20:06.625 12:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:20:06.625 12:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:20:06.625 12:19:02 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:20:06.625 12:19:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:20:07.557 12:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:20:07.557 12:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:20:07.557 12:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:07.557 12:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:07.557 12:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:20:07.557 ************************************ 00:20:07.557 START TEST filesystem_in_capsule_ext4 00:20:07.557 ************************************ 00:20:07.557 12:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:20:07.557 12:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:20:07.557 12:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:20:07.557 12:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:20:07.557 12:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:20:07.557 12:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:20:07.557 12:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:20:07.557 12:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:20:07.558 12:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:20:07.558 12:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:20:07.558 12:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:20:07.558 mke2fs 1.47.0 (5-Feb-2023) 00:20:07.558 Discarding device blocks: 0/522240 done 00:20:07.558 Creating filesystem with 522240 1k blocks and 130560 inodes 00:20:07.558 Filesystem UUID: 1151bcce-9dab-43b0-8f14-4c0dffde8fb0 00:20:07.558 Superblock backups stored on blocks: 00:20:07.558 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:20:07.558 00:20:07.558 Allocating group tables: 0/64 done 00:20:07.558 Writing inode tables: 
0/64 done 00:20:07.558 Creating journal (8192 blocks): done 00:20:07.558 Writing superblocks and filesystem accounting information: 0/64 done 00:20:07.558 00:20:07.558 12:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:20:07.558 12:19:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:20:14.216 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:20:14.216 12:19:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:20:14.216 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:20:14.216 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:20:14.781 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:20:14.781 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:20:14.781 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 71256 00:20:14.781 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:20:14.781 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:20:14.781 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:20:14.781 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:20:14.781 ************************************ 00:20:14.781 END TEST filesystem_in_capsule_ext4 00:20:14.781 ************************************ 00:20:14.781 00:20:14.781 real 0m7.134s 00:20:14.781 user 0m0.020s 00:20:14.781 sys 0m0.053s 00:20:14.781 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:14.781 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:20:14.781 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:20:14.781 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:14.781 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:14.781 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:20:14.781 
************************************ 00:20:14.782 START TEST filesystem_in_capsule_btrfs 00:20:14.782 ************************************ 00:20:14.782 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:20:14.782 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:20:14.782 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:20:14.782 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:20:14.782 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:20:14.782 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:20:14.782 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:20:14.782 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:20:14.782 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:20:14.782 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:20:14.782 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:20:14.782 btrfs-progs v6.8.1 00:20:14.782 See https://btrfs.readthedocs.io for more information. 00:20:14.782 00:20:14.782 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:20:14.782 NOTE: several default settings have changed in version 5.15, please make sure 00:20:14.782 this does not affect your deployments: 00:20:14.782 - DUP for metadata (-m dup) 00:20:14.782 - enabled no-holes (-O no-holes) 00:20:14.782 - enabled free-space-tree (-R free-space-tree) 00:20:14.782 00:20:14.782 Label: (null) 00:20:14.782 UUID: 9ce99d76-f380-4045-aeec-b300c8db434f 00:20:14.782 Node size: 16384 00:20:14.782 Sector size: 4096 (CPU page size: 4096) 00:20:14.782 Filesystem size: 510.00MiB 00:20:14.782 Block group profiles: 00:20:14.782 Data: single 8.00MiB 00:20:14.782 Metadata: DUP 32.00MiB 00:20:14.782 System: DUP 8.00MiB 00:20:14.782 SSD detected: yes 00:20:14.782 Zoned device: no 00:20:14.782 Features: extref, skinny-metadata, no-holes, free-space-tree 00:20:14.782 Checksum: crc32c 00:20:14.782 Number of devices: 1 00:20:14.782 Devices: 00:20:14.782 ID SIZE PATH 00:20:14.782 1 510.00MiB /dev/nvme0n1p1 00:20:14.782 00:20:14.782 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:20:14.782 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:20:14.782 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:20:14.782 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:20:15.715 12:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:20:15.715 12:19:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:20:16.281 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:20:16.281 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:20:16.281 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 71256 00:20:16.281 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:20:16.281 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:20:16.281 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:20:16.281 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:20:16.281 ************************************ 00:20:16.281 END TEST filesystem_in_capsule_btrfs 00:20:16.281 ************************************ 00:20:16.281 00:20:16.281 real 0m1.551s 00:20:16.281 user 0m0.015s 00:20:16.281 sys 0m0.057s 00:20:16.281 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 
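Every filesystem_* subtest in this log drives the same routine; the sketch below is reconstructed from the target/filesystem.sh@18-43 markers in the trace (retry and error handling trimmed, so treat it as a reading aid rather than the exact script).

nvmf_filesystem_create() {
    local fstype=$1 nvme_name=$2
    make_filesystem "$fstype" "/dev/${nvme_name}p1"   # mkfs over the NVMe-oF partition
    mount "/dev/${nvme_name}p1" /mnt/device
    touch /mnt/device/aaa                             # write, flush, delete, flush
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"                                # target process must still be alive
    lsblk -l -o NAME | grep -q -w "$nvme_name"        # namespace still visible
    lsblk -l -o NAME | grep -q -w "${nvme_name}p1"    # partition still visible
}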
00:20:16.281 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:20:16.281 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:20:16.281 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:16.281 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:16.281 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:20:16.281 ************************************ 00:20:16.281 START TEST filesystem_in_capsule_xfs 00:20:16.281 ************************************ 00:20:16.281 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:20:16.281 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:20:16.281 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:20:16.281 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:20:16.281 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:20:16.281 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:20:16.281 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:20:16.281 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:20:16.281 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:20:16.281 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:20:16.281 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:20:16.281 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:20:16.281 = sectsz=512 attr=2, projid32bit=1 00:20:16.281 = crc=1 finobt=1, sparse=1, rmapbt=0 00:20:16.281 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:20:16.281 data = bsize=4096 blocks=130560, imaxpct=25 00:20:16.281 = sunit=0 swidth=0 blks 00:20:16.281 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:20:16.281 log =internal log bsize=4096 blocks=16384, version=2 00:20:16.281 = sectsz=512 sunit=0 blks, lazy-count=1 00:20:16.281 realtime =none extsz=4096 blocks=0, rtextents=0 00:20:17.215 Discarding blocks...Done. 
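The common/autotest_common.sh@930-941 markers above show how make_filesystem chooses its force flag: mkfs.ext4 spells force as -F, while mkfs.xfs and mkfs.btrfs take -f. A condensed sketch (the helper's retry counter, visible in the trace as 'local i=0', is elided):

make_filesystem() {
    local fstype=$1 dev_name=$2 force
    if [[ $fstype == ext4 ]]; then
        force=-F          # mkfs.ext4 forces with -F
    else
        force=-f          # mkfs.xfs / mkfs.btrfs force with -f
    fi
    mkfs."$fstype" "$force" "$dev_name"
}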
00:20:17.215 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:20:17.215 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:20:19.114 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:20:19.114 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:20:19.114 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:20:19.114 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:20:19.114 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:20:19.114 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:20:19.114 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 71256 00:20:19.114 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:20:19.114 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:20:19.114 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:20:19.114 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:20:19.114 ************************************ 00:20:19.114 END TEST filesystem_in_capsule_xfs 00:20:19.114 ************************************ 00:20:19.114 00:20:19.114 real 0m2.549s 00:20:19.114 user 0m0.017s 00:20:19.114 sys 0m0.038s 00:20:19.114 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:19.114 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:20:19.114 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:20:19.114 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:20:19.114 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:19.114 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:19.114 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:19.114 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1223 -- # local i=0 00:20:19.114 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:20:19.114 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:19.114 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:20:19.114 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:19.114 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:20:19.114 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:19.114 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.114 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:20:19.114 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.114 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:20:19.114 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 71256 00:20:19.114 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 71256 ']' 00:20:19.114 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 71256 00:20:19.114 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:20:19.114 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:19.114 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71256 00:20:19.114 killing process with pid 71256 00:20:19.114 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:19.114 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:19.114 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71256' 00:20:19.114 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 71256 00:20:19.114 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 71256 00:20:19.114 ************************************ 00:20:19.114 END TEST nvmf_filesystem_in_capsule 00:20:19.114 ************************************ 00:20:19.114 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 
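The teardown just traced runs, in order, the commands below (a sketch assembled from the trace; pid 71256 is the nvmf_tgt started earlier, and rpc_cmd/killprocess are the autotest wrappers around scripts/rpc.py and kill/wait).

flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1    # drop the test partition
sync
nvme disconnect -n nqn.2016-06.io.spdk:cnode1     # detach the initiator side
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
killprocess 71256                                 # SIGTERM reactor_0, then wait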
00:20:19.114 00:20:19.114 real 0m16.026s 00:20:19.114 user 1m2.367s 00:20:19.114 sys 0m1.457s 00:20:19.114 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:19.114 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:20:19.372 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:20:19.372 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:19.372 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:20:19.372 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:19.372 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:20:19.372 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:19.372 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:19.372 rmmod nvme_tcp 00:20:19.372 rmmod nvme_fabrics 00:20:19.372 rmmod nvme_keyring 00:20:19.372 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:19.372 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:20:19.372 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:20:19.372 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:20:19.372 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:19.372 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:19.372 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:19.372 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:20:19.372 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:19.372 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:20:19.372 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:20:19.372 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:19.372 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:19.372 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:19.372 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:19.372 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:19.372 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:19.372 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:19.372 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:19.372 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:19.372 12:19:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:19.372 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:19.372 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:19.630 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:19.630 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:19.630 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:19.630 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:19.630 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:19.630 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:19.630 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:19.630 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@300 -- # return 0 00:20:19.630 00:20:19.630 real 0m30.796s 00:20:19.630 user 1m55.931s 00:20:19.630 sys 0m3.241s 00:20:19.630 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:19.630 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:20:19.630 ************************************ 00:20:19.630 END TEST nvmf_filesystem 00:20:19.630 ************************************ 00:20:19.630 12:19:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:20:19.630 12:19:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:19.630 12:19:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:19.630 12:19:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:19.630 ************************************ 00:20:19.630 START TEST nvmf_target_discovery 00:20:19.630 ************************************ 00:20:19.630 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:20:19.630 * Looking for test storage... 
00:20:19.630 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:19.630 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:19.630 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:19.630 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:20:19.630 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:19.631 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:19.631 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:19.631 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:19.631 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:20:19.631 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:20:19.631 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:20:19.631 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:20:19.631 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:20:19.631 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:20:19.631 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:20:19.631 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:19.631 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:20:19.631 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:20:19.631 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:19.631 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:19.631 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:20:19.631 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:20:19.631 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:19.631 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:20:19.631 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:20:19.631 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:20:19.631 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:20:19.631 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:19.631 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:20:19.631 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:20:19.631 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:19.631 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:19.631 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:20:19.631 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:19.631 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:19.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:19.631 --rc genhtml_branch_coverage=1 00:20:19.631 --rc genhtml_function_coverage=1 00:20:19.631 --rc genhtml_legend=1 00:20:19.631 --rc geninfo_all_blocks=1 00:20:19.631 --rc geninfo_unexecuted_blocks=1 00:20:19.631 00:20:19.631 ' 00:20:19.631 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:19.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:19.631 --rc genhtml_branch_coverage=1 00:20:19.631 --rc genhtml_function_coverage=1 00:20:19.631 --rc genhtml_legend=1 00:20:19.631 --rc geninfo_all_blocks=1 00:20:19.631 --rc geninfo_unexecuted_blocks=1 00:20:19.631 00:20:19.631 ' 00:20:19.631 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:19.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:19.631 --rc genhtml_branch_coverage=1 00:20:19.631 --rc genhtml_function_coverage=1 00:20:19.631 --rc genhtml_legend=1 00:20:19.631 --rc geninfo_all_blocks=1 00:20:19.631 --rc geninfo_unexecuted_blocks=1 00:20:19.631 00:20:19.631 ' 00:20:19.631 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:19.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:19.631 --rc genhtml_branch_coverage=1 00:20:19.631 --rc genhtml_function_coverage=1 00:20:19.631 --rc genhtml_legend=1 00:20:19.631 --rc geninfo_all_blocks=1 00:20:19.631 --rc geninfo_unexecuted_blocks=1 00:20:19.631 00:20:19.631 ' 00:20:19.631 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:19.631 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:20:19.631 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:19.631 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:19.631 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:19.631 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:19.631 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:19.631 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:19.631 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:19.631 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:19.631 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:19.631 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:19.890 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:20:19.890 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:20:19.890 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:19.890 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:19.890 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:19.890 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:19.890 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:19.890 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:20:19.890 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:19.890 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:19.890 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:19.890 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.890 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.890 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.890 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:19.891 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 
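An aside on the block above: the "integer expression expected" complaint is bash's test builtin rejecting the empty operand in '[' '' -eq 1 ']'; the condition simply evaluates false and the run continues. nvmftestinit has now selected the veth backend, and nvmf_veth_init names its pieces (the NVMF_* assignments around this point), unconditionally tears down any leftovers (the "Cannot find device" and "Cannot open network namespace" messages in the trace below are expected on a clean host), and then builds a veth-and-bridge topology. A condensed sketch of that topology, with interface names and addresses taken verbatim from the trace; the loop is my condensation, the script runs each command individually:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br    # initiator pair 1
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2   # initiator pair 2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # target pair 1
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2    # target pair 2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk               # target ends live in the namespace
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator addresses (host side)
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target addresses
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge                                # bridge joins the four *_br peers
  ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" up
      ip link set "$dev" master nvmf_br
  done

Once the bridge is wired up, the script inserts iptables ACCEPT rules tagged with an SPDK_NVMF comment (so teardown can later strip exactly those rules via iptables-save | grep -v SPDK_NVMF | iptables-restore, as seen at the end of this test) and pings all four addresses to prove connectivity in both directions.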
00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:19.891 Cannot find device "nvmf_init_br" 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # true 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:19.891 Cannot find device "nvmf_init_br2" 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # true 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:19.891 Cannot find device "nvmf_tgt_br" 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@164 -- # true 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:19.891 Cannot find device "nvmf_tgt_br2" 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@165 -- # true 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:19.891 Cannot find device "nvmf_init_br" 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@166 -- # true 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:19.891 Cannot find device "nvmf_init_br2" 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@167 -- # true 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:19.891 Cannot find device "nvmf_tgt_br" 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@168 -- # true 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:19.891 Cannot find device "nvmf_tgt_br2" 00:20:19.891 12:19:15 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # true 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:19.891 Cannot find device "nvmf_br" 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@170 -- # true 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:19.891 Cannot find device "nvmf_init_if" 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@171 -- # true 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:19.891 Cannot find device "nvmf_init_if2" 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@172 -- # true 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:19.891 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@173 -- # true 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:19.891 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@174 -- # true 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:19.891 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:19.892 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:19.892 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:19.892 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:19.892 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:19.892 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:19.892 12:19:15 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:19.892 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:19.892 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:19.892 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:19.892 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:19.892 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:19.892 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:19.892 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:19.892 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:19.892 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:19.892 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:19.892 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:19.892 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:20.150 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:20.150 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:20.150 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:20.150 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:20.150 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:20.150 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:20.150 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:20.150 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:20:20.150 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:20:20.150 00:20:20.150 --- 10.0.0.3 ping statistics --- 00:20:20.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:20.150 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:20:20.150 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:20.150 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:20.150 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:20:20.150 00:20:20.150 --- 10.0.0.4 ping statistics --- 00:20:20.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:20.150 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:20:20.150 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:20.151 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:20.151 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.014 ms 00:20:20.151 00:20:20.151 --- 10.0.0.1 ping statistics --- 00:20:20.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:20.151 rtt min/avg/max/mdev = 0.014/0.014/0.014/0.000 ms 00:20:20.151 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:20.151 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:20.151 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:20:20.151 00:20:20.151 --- 10.0.0.2 ping statistics --- 00:20:20.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:20.151 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:20:20.151 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:20.151 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@461 -- # return 0 00:20:20.151 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:20.151 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:20.151 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:20.151 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:20.151 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:20.151 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:20.151 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:20.151 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:20:20.151 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:20.151 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:20.151 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:20.151 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=71867 00:20:20.151 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 71867 00:20:20.151 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 71867 ']' 00:20:20.151 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:20.151 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:20.151 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:20.151 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:20.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:20.151 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:20.151 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:20.151 [2024-11-25 12:19:15.853732] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:20:20.151 [2024-11-25 12:19:15.853797] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:20.151 [2024-11-25 12:19:15.990439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:20.151 [2024-11-25 12:19:16.021386] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:20.151 [2024-11-25 12:19:16.021420] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:20.151 [2024-11-25 12:19:16.021426] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:20.151 [2024-11-25 12:19:16.021430] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:20.151 [2024-11-25 12:19:16.021434] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
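With the namespace network verified, nvmfappstart launches the target inside it: PID 71867 here, trace mask 0xFFFF, and core mask 0xF, which is why four "Reactor started on core N" notices follow. A minimal sketch of the start-and-wait pattern, assuming SPDK's scripts/rpc.py is usable at that path; the polling loop is an illustration, not the exact waitforlisten implementation from autotest_common.sh:

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # Poll the UNIX-domain RPC socket until the target answers, or give up.
  for ((i = 0; i < 100; i++)); do
      if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
              rpc_get_methods &> /dev/null; then
          break
      fi
      sleep 0.1
  done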
00:20:20.151 [2024-11-25 12:19:16.022030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:20.151 [2024-11-25 12:19:16.022277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:20.151 [2024-11-25 12:19:16.022436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:20.408 [2024-11-25 12:19:16.022551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:20.972 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:20.972 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:20:20.972 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:20.972 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:20.972 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:20.972 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:20.972 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:20.972 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.972 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:20.972 [2024-11-25 12:19:16.764249] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:20.972 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.972 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:20:20.972 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:20:20.972 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:20:20.972 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.972 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:20.972 Null1 00:20:20.972 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.972 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:20.972 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.972 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:20.972 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.972 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:20:20.972 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.972 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:20.972 12:19:16 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.972 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:20.973 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.973 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:20.973 [2024-11-25 12:19:16.808350] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:20.973 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.973 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:20:20.973 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:20:20.973 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.973 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:20.973 Null2 00:20:20.973 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.973 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:20.973 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.973 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:20.973 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.973 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:20:20.973 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.973 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:20.973 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.973 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:20:20.973 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.973 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:20.973 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.973 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:20:20.973 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:20:20.973 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.973 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:20:21.231 Null3 00:20:21.231 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.231 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:20:21.231 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.231 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:21.231 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.231 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:20:21.231 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.231 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:21.231 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.231 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:20:21.231 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.231 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:21.231 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.231 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:20:21.231 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:20:21.231 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.231 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:21.231 Null4 00:20:21.231 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.231 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:20:21.231 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.231 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:21.231 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.231 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:20:21.231 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.231 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:21.231 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.231 12:19:16 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.3 -s 4420 00:20:21.231 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.231 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:21.231 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.231 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:20:21.231 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.231 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:21.231 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.231 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.3 -s 4430 00:20:21.231 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.231 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:21.231 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.231 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid=5934b511-67a7-4c73-84f0-52fdf36bb79d -t tcp -a 10.0.0.3 -s 4420 00:20:21.231 00:20:21.231 Discovery Log Number of Records 6, Generation counter 6 00:20:21.231 =====Discovery Log Entry 0====== 00:20:21.231 trtype: tcp 00:20:21.231 adrfam: ipv4 00:20:21.231 subtype: current discovery subsystem 00:20:21.231 treq: not required 00:20:21.231 portid: 0 00:20:21.231 trsvcid: 4420 00:20:21.231 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:21.231 traddr: 10.0.0.3 00:20:21.231 eflags: explicit discovery connections, duplicate discovery information 00:20:21.231 sectype: none 00:20:21.231 =====Discovery Log Entry 1====== 00:20:21.231 trtype: tcp 00:20:21.231 adrfam: ipv4 00:20:21.231 subtype: nvme subsystem 00:20:21.231 treq: not required 00:20:21.231 portid: 0 00:20:21.231 trsvcid: 4420 00:20:21.231 subnqn: nqn.2016-06.io.spdk:cnode1 00:20:21.231 traddr: 10.0.0.3 00:20:21.231 eflags: none 00:20:21.231 sectype: none 00:20:21.231 =====Discovery Log Entry 2====== 00:20:21.231 trtype: tcp 00:20:21.231 adrfam: ipv4 00:20:21.231 subtype: nvme subsystem 00:20:21.231 treq: not required 00:20:21.231 portid: 0 00:20:21.231 trsvcid: 4420 00:20:21.231 subnqn: nqn.2016-06.io.spdk:cnode2 00:20:21.231 traddr: 10.0.0.3 00:20:21.231 eflags: none 00:20:21.231 sectype: none 00:20:21.231 =====Discovery Log Entry 3====== 00:20:21.231 trtype: tcp 00:20:21.231 adrfam: ipv4 00:20:21.231 subtype: nvme subsystem 00:20:21.231 treq: not required 00:20:21.231 portid: 0 00:20:21.231 trsvcid: 4420 00:20:21.231 subnqn: nqn.2016-06.io.spdk:cnode3 00:20:21.231 traddr: 10.0.0.3 00:20:21.231 eflags: none 00:20:21.231 sectype: none 00:20:21.231 =====Discovery Log Entry 4====== 00:20:21.231 trtype: tcp 00:20:21.231 adrfam: ipv4 00:20:21.231 subtype: nvme subsystem 
00:20:21.231 treq: not required 00:20:21.231 portid: 0 00:20:21.231 trsvcid: 4420 00:20:21.231 subnqn: nqn.2016-06.io.spdk:cnode4 00:20:21.231 traddr: 10.0.0.3 00:20:21.231 eflags: none 00:20:21.231 sectype: none 00:20:21.231 =====Discovery Log Entry 5====== 00:20:21.231 trtype: tcp 00:20:21.231 adrfam: ipv4 00:20:21.231 subtype: discovery subsystem referral 00:20:21.231 treq: not required 00:20:21.231 portid: 0 00:20:21.231 trsvcid: 4430 00:20:21.231 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:21.231 traddr: 10.0.0.3 00:20:21.231 eflags: none 00:20:21.231 sectype: none 00:20:21.231 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:20:21.231 Perform nvmf subsystem discovery via RPC 00:20:21.231 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:20:21.231 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.231 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:21.231 [ 00:20:21.231 { 00:20:21.231 "allow_any_host": true, 00:20:21.231 "hosts": [], 00:20:21.231 "listen_addresses": [ 00:20:21.231 { 00:20:21.231 "adrfam": "IPv4", 00:20:21.231 "traddr": "10.0.0.3", 00:20:21.231 "trsvcid": "4420", 00:20:21.231 "trtype": "TCP" 00:20:21.231 } 00:20:21.231 ], 00:20:21.231 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:21.231 "subtype": "Discovery" 00:20:21.231 }, 00:20:21.231 { 00:20:21.231 "allow_any_host": true, 00:20:21.231 "hosts": [], 00:20:21.231 "listen_addresses": [ 00:20:21.231 { 00:20:21.231 "adrfam": "IPv4", 00:20:21.231 "traddr": "10.0.0.3", 00:20:21.231 "trsvcid": "4420", 00:20:21.231 "trtype": "TCP" 00:20:21.231 } 00:20:21.231 ], 00:20:21.231 "max_cntlid": 65519, 00:20:21.231 "max_namespaces": 32, 00:20:21.231 "min_cntlid": 1, 00:20:21.231 "model_number": "SPDK bdev Controller", 00:20:21.231 "namespaces": [ 00:20:21.231 { 00:20:21.231 "bdev_name": "Null1", 00:20:21.231 "name": "Null1", 00:20:21.231 "nguid": "F291CA5F9D3F4BEA98E38065BE86C854", 00:20:21.231 "nsid": 1, 00:20:21.231 "uuid": "f291ca5f-9d3f-4bea-98e3-8065be86c854" 00:20:21.231 } 00:20:21.231 ], 00:20:21.231 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:21.231 "serial_number": "SPDK00000000000001", 00:20:21.231 "subtype": "NVMe" 00:20:21.231 }, 00:20:21.231 { 00:20:21.231 "allow_any_host": true, 00:20:21.232 "hosts": [], 00:20:21.232 "listen_addresses": [ 00:20:21.232 { 00:20:21.232 "adrfam": "IPv4", 00:20:21.232 "traddr": "10.0.0.3", 00:20:21.232 "trsvcid": "4420", 00:20:21.232 "trtype": "TCP" 00:20:21.232 } 00:20:21.232 ], 00:20:21.232 "max_cntlid": 65519, 00:20:21.232 "max_namespaces": 32, 00:20:21.232 "min_cntlid": 1, 00:20:21.232 "model_number": "SPDK bdev Controller", 00:20:21.232 "namespaces": [ 00:20:21.232 { 00:20:21.232 "bdev_name": "Null2", 00:20:21.232 "name": "Null2", 00:20:21.232 "nguid": "FC54DADC52AF436E8B330710106D8E5A", 00:20:21.232 "nsid": 1, 00:20:21.232 "uuid": "fc54dadc-52af-436e-8b33-0710106d8e5a" 00:20:21.232 } 00:20:21.232 ], 00:20:21.232 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:20:21.232 "serial_number": "SPDK00000000000002", 00:20:21.232 "subtype": "NVMe" 00:20:21.232 }, 00:20:21.232 { 00:20:21.232 "allow_any_host": true, 00:20:21.232 "hosts": [], 00:20:21.232 "listen_addresses": [ 00:20:21.232 { 00:20:21.232 "adrfam": "IPv4", 00:20:21.232 "traddr": "10.0.0.3", 00:20:21.232 "trsvcid": "4420", 00:20:21.232 
"trtype": "TCP" 00:20:21.232 } 00:20:21.232 ], 00:20:21.232 "max_cntlid": 65519, 00:20:21.232 "max_namespaces": 32, 00:20:21.232 "min_cntlid": 1, 00:20:21.232 "model_number": "SPDK bdev Controller", 00:20:21.232 "namespaces": [ 00:20:21.232 { 00:20:21.232 "bdev_name": "Null3", 00:20:21.232 "name": "Null3", 00:20:21.232 "nguid": "360CA7BA2FA44F26ACCAFBE55B308239", 00:20:21.232 "nsid": 1, 00:20:21.232 "uuid": "360ca7ba-2fa4-4f26-acca-fbe55b308239" 00:20:21.232 } 00:20:21.232 ], 00:20:21.232 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:20:21.232 "serial_number": "SPDK00000000000003", 00:20:21.232 "subtype": "NVMe" 00:20:21.232 }, 00:20:21.232 { 00:20:21.232 "allow_any_host": true, 00:20:21.232 "hosts": [], 00:20:21.232 "listen_addresses": [ 00:20:21.232 { 00:20:21.232 "adrfam": "IPv4", 00:20:21.232 "traddr": "10.0.0.3", 00:20:21.232 "trsvcid": "4420", 00:20:21.232 "trtype": "TCP" 00:20:21.232 } 00:20:21.232 ], 00:20:21.232 "max_cntlid": 65519, 00:20:21.232 "max_namespaces": 32, 00:20:21.232 "min_cntlid": 1, 00:20:21.232 "model_number": "SPDK bdev Controller", 00:20:21.232 "namespaces": [ 00:20:21.232 { 00:20:21.232 "bdev_name": "Null4", 00:20:21.232 "name": "Null4", 00:20:21.232 "nguid": "C03C2DD5A0814295BF393F415EAC2E78", 00:20:21.232 "nsid": 1, 00:20:21.232 "uuid": "c03c2dd5-a081-4295-bf39-3f415eac2e78" 00:20:21.232 } 00:20:21.232 ], 00:20:21.232 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:20:21.232 "serial_number": "SPDK00000000000004", 00:20:21.232 "subtype": "NVMe" 00:20:21.232 } 00:20:21.232 ] 00:20:21.232 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.232 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:20:21.232 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:20:21.232 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:21.232 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.232 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:21.232 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.232 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:20:21.232 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.232 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:21.232 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.232 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:20:21.232 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:21.232 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.232 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:21.232 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.232 12:19:17 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:20:21.232 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.232 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:21.232 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.232 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:20:21.232 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:20:21.232 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.232 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:21.232 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.232 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:20:21.232 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.232 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:21.490 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.490 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:20:21.490 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:20:21.490 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.490 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:21.490 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.490 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:20:21.490 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.490 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:21.490 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.490 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.3 -s 4430 00:20:21.490 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.490 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:21.490 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.490 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:20:21.490 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.490 12:19:17 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:20:21.490 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:21.490 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.490 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:20:21.490 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:20:21.490 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:20:21.490 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:20:21.490 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:21.490 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:20:21.490 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:21.490 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:20:21.490 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:21.490 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:21.490 rmmod nvme_tcp 00:20:21.490 rmmod nvme_fabrics 00:20:21.490 rmmod nvme_keyring 00:20:21.490 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:21.490 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:20:21.490 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:20:21.490 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 71867 ']' 00:20:21.490 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 71867 00:20:21.490 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 71867 ']' 00:20:21.490 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 71867 00:20:21.490 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:20:21.490 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:21.490 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71867 00:20:21.490 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:21.490 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:21.490 killing process with pid 71867 00:20:21.490 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71867' 00:20:21.490 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 71867 00:20:21.490 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 71867 00:20:21.748 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:21.748 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:21.748 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:21.748 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:20:21.748 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:20:21.748 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:21.748 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:20:21.748 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:21.748 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:21.748 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:21.748 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:21.748 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:21.748 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:21.748 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:21.748 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:21.748 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:21.748 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:21.748 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:21.748 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:21.749 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:21.749 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:21.749 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:21.749 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:21.749 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:21.749 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:21.749 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:21.749 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@300 -- # return 0 00:20:21.749 00:20:21.749 real 0m2.216s 00:20:21.749 user 0m5.823s 00:20:21.749 sys 0m0.518s 00:20:21.749 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- 
# xtrace_disable 00:20:21.749 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:20:21.749 ************************************ 00:20:21.749 END TEST nvmf_target_discovery 00:20:21.749 ************************************ 00:20:21.749 12:19:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:20:21.749 12:19:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:21.749 12:19:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:21.749 12:19:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:22.009 ************************************ 00:20:22.009 START TEST nvmf_referrals 00:20:22.009 ************************************ 00:20:22.009 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:20:22.009 * Looking for test storage... 00:20:22.009 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:22.009 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:22.009 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:20:22.009 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:22.009 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:22.009 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:22.009 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:22.009 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:22.009 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:20:22.009 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:20:22.009 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:20:22.009 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:20:22.009 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:20:22.009 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:20:22.009 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:20:22.009 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:22.009 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:20:22.009 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:20:22.009 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:22.009 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:22.009 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:20:22.009 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:20:22.009 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:22.009 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:20:22.009 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:20:22.009 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:20:22.009 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:20:22.009 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:22.009 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:20:22.009 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:20:22.009 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:22.009 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:22.009 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:20:22.009 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:22.009 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:22.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.009 --rc genhtml_branch_coverage=1 00:20:22.009 --rc genhtml_function_coverage=1 00:20:22.009 --rc genhtml_legend=1 00:20:22.009 --rc geninfo_all_blocks=1 00:20:22.009 --rc geninfo_unexecuted_blocks=1 00:20:22.009 00:20:22.009 ' 00:20:22.009 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:22.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.009 --rc genhtml_branch_coverage=1 00:20:22.009 --rc genhtml_function_coverage=1 00:20:22.009 --rc genhtml_legend=1 00:20:22.009 --rc geninfo_all_blocks=1 00:20:22.009 --rc geninfo_unexecuted_blocks=1 00:20:22.009 00:20:22.009 ' 00:20:22.009 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:22.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.009 --rc genhtml_branch_coverage=1 00:20:22.009 --rc genhtml_function_coverage=1 00:20:22.009 --rc genhtml_legend=1 00:20:22.009 --rc geninfo_all_blocks=1 00:20:22.009 --rc geninfo_unexecuted_blocks=1 00:20:22.009 00:20:22.009 ' 00:20:22.009 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:22.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.009 --rc genhtml_branch_coverage=1 00:20:22.009 --rc genhtml_function_coverage=1 00:20:22.009 --rc genhtml_legend=1 00:20:22.009 --rc geninfo_all_blocks=1 00:20:22.009 --rc geninfo_unexecuted_blocks=1 00:20:22.009 00:20:22.009 ' 00:20:22.009 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:22.009 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 
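The shell trace above is scripts/common.sh's version gate for lcov: each version string is split on the characters ".-:" and compared component-wise, and the extra "--rc" branch/function coverage flags are exported only when lcov is older than 2. A minimal sketch of that check, reconstructed from the xtrace (the helper name and the padding of missing components with 0 are assumptions inferred from the trace, not a copy of the script):

    # sketch: component-wise "less than" version test, as stepped through in the trace
    lt() {
        local -a v1 v2
        local i
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # first lower component decides
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    # usage mirroring the trace: lcov 1.15 is less than 2, so coverage flags are enabled
    if lt "$(lcov --version | awk '{print $NF}')" 2; then
        export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi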
00:20:22.009 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:22.009 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:22.009 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:22.009 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:[... same listing as export.sh@2 above with /opt/go/1.21.1/bin prepended; duplicate PATH dump trimmed] 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same listing with /opt/protoc/21.7/bin prepended; duplicate PATH dump trimmed] 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... same exported listing; duplicate PATH dump trimmed] 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:22.010 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:20:22.010 12:19:17 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@155 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:22.010 Cannot find device "nvmf_init_br" 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # true 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:22.010 Cannot find device "nvmf_init_br2" 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # true 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:22.010 Cannot find device "nvmf_tgt_br" 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@164 -- # true 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:22.010 Cannot find device "nvmf_tgt_br2" 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@165 -- # true 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:22.010 Cannot find device "nvmf_init_br" 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@166 -- # true 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:22.010 Cannot find device "nvmf_init_br2" 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@167 -- # true 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:22.010 Cannot find device "nvmf_tgt_br" 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@168 -- # true 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:22.010 Cannot find device "nvmf_tgt_br2" 00:20:22.010 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # true 00:20:22.011 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:22.011 Cannot find device "nvmf_br" 00:20:22.011 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@170 -- # true 00:20:22.011 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:22.011 Cannot find device "nvmf_init_if" 00:20:22.011 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@171 -- # true 00:20:22.011 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:22.269 Cannot find device "nvmf_init_if2" 00:20:22.269 12:19:17 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@172 -- # true 00:20:22.269 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:22.269 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:22.269 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@173 -- # true 00:20:22.269 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:22.269 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:22.269 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@174 -- # true 00:20:22.269 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:22.269 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:22.269 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:22.269 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:22.269 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:22.269 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:22.269 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:22.269 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:22.269 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:22.269 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:22.269 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:22.269 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:22.269 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:22.269 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:22.269 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:22.269 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:22.269 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:22.269 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:22.269 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:22.269 12:19:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:22.269 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:22.269 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:22.269 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:22.269 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:22.269 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:22.269 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:22.269 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:22.269 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:22.269 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:22.269 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:22.269 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:22.269 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:22.269 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:22.269 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:22.269 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:20:22.269 00:20:22.269 --- 10.0.0.3 ping statistics --- 00:20:22.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:22.269 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:20:22.269 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:22.270 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:22.270 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:20:22.270 00:20:22.270 --- 10.0.0.4 ping statistics --- 00:20:22.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:22.270 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:20:22.270 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:22.270 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:22.270 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:20:22.270 00:20:22.270 --- 10.0.0.1 ping statistics --- 00:20:22.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:22.270 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:20:22.270 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:22.270 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
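Condensed, the nvmf_veth_init sequence traced above builds a bridged veth topology between the host (initiator side) and the nvmf_tgt_ns_spdk namespace (target side). The commands below are taken from the trace itself and shown for one of the two interface pairs; the second pair (nvmf_init_if2/nvmf_tgt_if2, 10.0.0.2 and 10.0.0.4) is set up identically:

    # sketch: one initiator/target pair of the test network
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side stays on the host
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side moves into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up   # likewise for the tgt interfaces
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                      # the bridge joins the veth peers
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.3    # host to namespace, across the bridge

The SPDK_NVMF comment is what the teardown's iptr helper keys on: iptables-save piped through grep -v SPDK_NVMF into iptables-restore drops exactly the rules this suite added and nothing else.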
00:20:22.270 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.037 ms 00:20:22.270 00:20:22.270 --- 10.0.0.2 ping statistics --- 00:20:22.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:22.270 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:20:22.270 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:22.270 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@461 -- # return 0 00:20:22.270 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:22.270 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:22.270 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:22.270 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:22.270 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:22.270 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:22.270 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:22.270 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:20:22.270 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:22.270 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:22.270 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:22.270 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=72145 00:20:22.270 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:22.270 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 72145 00:20:22.270 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 72145 ']' 00:20:22.270 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:22.270 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:22.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:22.270 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:22.270 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:22.270 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:22.270 [2024-11-25 12:19:18.133910] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:20:22.270 [2024-11-25 12:19:18.133965] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:22.528 [2024-11-25 12:19:18.276299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:22.528 [2024-11-25 12:19:18.313522] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:22.528 [2024-11-25 12:19:18.313570] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:22.528 [2024-11-25 12:19:18.313580] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:22.528 [2024-11-25 12:19:18.313588] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:22.528 [2024-11-25 12:19:18.313596] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:22.528 [2024-11-25 12:19:18.314332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:22.528 [2024-11-25 12:19:18.314395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:22.528 [2024-11-25 12:19:18.315582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:22.528 [2024-11-25 12:19:18.315589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:23.462 [2024-11-25 12:19:19.042769] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.3 -s 8009 discovery 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:23.462 [2024-11-25 12:19:19.054891] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
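At this point the target is up: waitforlisten confirmed pid 72145 on /var/tmp/spdk.sock, the 0xF core mask produced the four reactors above, and referrals.sh issued two RPCs, one to create the TCP transport and one to expose the discovery subsystem on 10.0.0.3:8009. A sketch of that bring-up (rpc_cmd is the suite's wrapper; scripts/rpc.py is assumed here as its plain equivalent):

    # sketch: discovery-service bring-up mirrored from the rpc_cmd calls above
    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF &                                    # backgrounded by the suite
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192         # flags verbatim from the trace
    scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.3 -s 8009 discovery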
target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid=5934b511-67a7-4c73-84f0-52fdf36bb79d -t tcp -a 10.0.0.3 -s 8009 -o json 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid=5934b511-67a7-4c73-84f0-52fdf36bb79d -t tcp -a 10.0.0.3 -s 8009 -o json 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:20:23.462 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:20:23.720 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:20:23.720 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:20:23.720 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:20:23.720 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.720 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:23.720 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.720 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:20:23.720 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.720 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:23.720 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.720 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:20:23.720 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:20:23.720 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:20:23.720 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:20:23.720 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.720 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:23.720 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:20:23.720 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.720 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:20:23.720 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:20:23.720 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:20:23.720 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:20:23.720 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:20:23.720 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid=5934b511-67a7-4c73-84f0-52fdf36bb79d -t tcp -a 10.0.0.3 -s 8009 -o json 00:20:23.720 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:20:23.720 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:20:23.979 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:20:23.979 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:20:23.979 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:20:23.979 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:20:23.979 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:20:23.979 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid=5934b511-67a7-4c73-84f0-52fdf36bb79d -t tcp -a 10.0.0.3 -s 8009 -o json 00:20:23.979 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:20:23.979 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:20:23.979 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:20:23.979 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:20:23.979 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:20:23.979 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid=5934b511-67a7-4c73-84f0-52fdf36bb79d -t tcp -a 10.0.0.3 -s 8009 -o json 00:20:23.979 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:20:23.979 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:20:23.979 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:20:23.979 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.979 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:23.979 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.238 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:20:24.238 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:20:24.238 12:19:19 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:20:24.238 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.238 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:24.238 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:20:24.238 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:20:24.238 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.238 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:20:24.238 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:20:24.238 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:20:24.238 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:20:24.238 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:20:24.238 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid=5934b511-67a7-4c73-84f0-52fdf36bb79d -t tcp -a 10.0.0.3 -s 8009 -o json 00:20:24.238 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:20:24.238 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:20:24.238 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:20:24.238 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:20:24.238 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:20:24.238 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:20:24.238 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:20:24.238 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid=5934b511-67a7-4c73-84f0-52fdf36bb79d -t tcp -a 10.0.0.3 -s 8009 -o json 00:20:24.238 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:20:24.238 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:20:24.238 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:20:24.238 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:20:24.238 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:20:24.238 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 
--hostid=5934b511-67a7-4c73-84f0-52fdf36bb79d -t tcp -a 10.0.0.3 -s 8009 -o json 00:20:24.238 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:20:24.498 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:20:24.498 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:20:24.498 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.498 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:24.498 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.498 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:20:24.498 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.498 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:24.498 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:20:24.498 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.498 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:20:24.498 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:20:24.498 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:20:24.498 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:20:24.498 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:20:24.498 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid=5934b511-67a7-4c73-84f0-52fdf36bb79d -t tcp -a 10.0.0.3 -s 8009 -o json 00:20:24.498 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:20:24.756 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:20:24.756 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:20:24.756 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:20:24.756 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:20:24.756 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:24.756 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:20:25.322 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:25.322 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:20:25.322 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:25.322 
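The referral leg just traced boils down to a round-trip: add referrals over RPC, read them back both from the RPC and from an actual discovery log page, then remove them and check for an empty list. A condensed sketch, using the same RPC and nvme-cli invocations as the trace (scripts/rpc.py again assumed as the rpc_cmd equivalent; the jq filter is how get_referral_ips separates referral records from the current discovery subsystem's own entry):

    # sketch: referral round-trip as exercised by referrals.sh
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
    scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr'    # RPC view
    nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -a 10.0.0.3 -s 8009 -o json |
        jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'  # wire view
    scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430

Passing -n nqn.2016-06.io.spdk:cnode1 to nvmf_discovery_add_referral, as at referrals.sh@62 above, makes the referral appear in the discovery log as an "nvme subsystem" record rather than a "discovery subsystem referral", which is exactly what the @67 and @68 jq checks assert.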
12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:25.322 rmmod nvme_tcp 00:20:25.322 rmmod nvme_fabrics 00:20:25.322 rmmod nvme_keyring 00:20:25.322 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:25.322 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:20:25.322 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:20:25.322 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 72145 ']' 00:20:25.322 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 72145 00:20:25.322 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 72145 ']' 00:20:25.322 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 72145 00:20:25.322 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:20:25.322 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:25.323 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72145 00:20:25.323 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:25.323 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:25.323 killing process with pid 72145 00:20:25.323 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72145' 00:20:25.323 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 72145 00:20:25.323 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 72145 00:20:25.323 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:25.323 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:25.323 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:25.323 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:20:25.323 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:20:25.323 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:25.323 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:20:25.323 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:25.323 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:25.323 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:25.323 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:25.323 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:25.323 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:25.323 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:25.323 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:25.323 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:25.323 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:25.323 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:25.581 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:25.581 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:25.581 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:25.581 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:25.581 ************************************ 00:20:25.581 END TEST nvmf_referrals 00:20:25.581 ************************************ 00:20:25.581 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:25.581 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:25.581 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:25.581 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:25.581 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@300 -- # return 0 00:20:25.581 00:20:25.581 real 0m3.724s 00:20:25.581 user 0m11.917s 00:20:25.581 sys 0m0.753s 00:20:25.581 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:25.581 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:20:25.581 12:19:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:20:25.581 12:19:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:25.581 12:19:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:25.581 12:19:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:25.581 ************************************ 00:20:25.581 START TEST nvmf_connect_disconnect 00:20:25.581 ************************************ 00:20:25.581 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:20:25.841 * Looking for test storage... 
00:20:25.841 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:25.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.841 --rc genhtml_branch_coverage=1 00:20:25.841 --rc genhtml_function_coverage=1 00:20:25.841 --rc genhtml_legend=1 00:20:25.841 --rc geninfo_all_blocks=1 00:20:25.841 --rc geninfo_unexecuted_blocks=1 00:20:25.841 00:20:25.841 ' 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:25.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.841 --rc genhtml_branch_coverage=1 00:20:25.841 --rc genhtml_function_coverage=1 00:20:25.841 --rc genhtml_legend=1 00:20:25.841 --rc geninfo_all_blocks=1 00:20:25.841 --rc geninfo_unexecuted_blocks=1 00:20:25.841 00:20:25.841 ' 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:25.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.841 --rc genhtml_branch_coverage=1 00:20:25.841 --rc genhtml_function_coverage=1 00:20:25.841 --rc genhtml_legend=1 00:20:25.841 --rc geninfo_all_blocks=1 00:20:25.841 --rc geninfo_unexecuted_blocks=1 00:20:25.841 00:20:25.841 ' 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:25.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:25.841 --rc genhtml_branch_coverage=1 00:20:25.841 --rc genhtml_function_coverage=1 00:20:25.841 --rc genhtml_legend=1 00:20:25.841 --rc geninfo_all_blocks=1 00:20:25.841 --rc geninfo_unexecuted_blocks=1 00:20:25.841 00:20:25.841 ' 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.841 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:25.842 12:19:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:25.842 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:25.842 Cannot find device "nvmf_init_br" 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # true 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:25.842 Cannot find device "nvmf_init_br2" 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # true 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:25.842 Cannot find device "nvmf_tgt_br" 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@164 -- # true 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:25.842 Cannot find device "nvmf_tgt_br2" 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@165 -- # true 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:25.842 Cannot find device "nvmf_init_br" 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@166 -- # true 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:25.842 Cannot find device "nvmf_init_br2" 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@167 -- # true 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:25.842 Cannot find device "nvmf_tgt_br" 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@168 -- # true 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:25.842 Cannot find device "nvmf_tgt_br2" 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # true 
00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:25.842 Cannot find device "nvmf_br" 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@170 -- # true 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:25.842 Cannot find device "nvmf_init_if" 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # true 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:25.842 Cannot find device "nvmf_init_if2" 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@172 -- # true 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:25.842 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@173 -- # true 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:25.842 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@174 -- # true 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:25.842 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:26.101 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:26.101 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:26.101 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:26.101 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:26.101 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:26.101 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:26.101 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:26.101 12:19:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:26.101 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:26.101 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:26.101 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:26.101 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:26.101 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:26.101 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:26.101 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:26.101 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:26.101 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:26.101 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:26.101 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:26.101 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:26.102 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:26.102 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:26.102 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:26.102 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:26.102 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:26.102 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:26.102 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:26.102 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:20:26.102 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:20:26.102 00:20:26.102 --- 10.0.0.3 ping statistics --- 00:20:26.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.102 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:20:26.102 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:26.102 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:26.102 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.066 ms 00:20:26.102 00:20:26.102 --- 10.0.0.4 ping statistics --- 00:20:26.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.102 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:20:26.102 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:26.102 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:26.102 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:20:26.102 00:20:26.102 --- 10.0.0.1 ping statistics --- 00:20:26.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.102 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:20:26.102 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:26.102 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:26.102 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:20:26.102 00:20:26.102 --- 10.0.0.2 ping statistics --- 00:20:26.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:26.102 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:20:26.102 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:26.102 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@461 -- # return 0 00:20:26.102 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:26.102 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:26.102 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:26.102 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:26.102 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:26.102 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:26.102 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:26.102 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:20:26.102 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:26.102 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:26.102 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:20:26.102 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=72502 00:20:26.102 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 72502 00:20:26.102 12:19:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 72502 ']' 00:20:26.102 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:26.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:26.102 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:26.102 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:26.102 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:26.102 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:20:26.102 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:26.102 [2024-11-25 12:19:21.892456] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:20:26.102 [2024-11-25 12:19:21.892516] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:26.360 [2024-11-25 12:19:22.032094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:26.360 [2024-11-25 12:19:22.067656] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:26.360 [2024-11-25 12:19:22.067697] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:26.360 [2024-11-25 12:19:22.067703] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:26.360 [2024-11-25 12:19:22.067708] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:26.360 [2024-11-25 12:19:22.067712] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:26.360 [2024-11-25 12:19:22.068425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:26.360 [2024-11-25 12:19:22.068516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:26.360 [2024-11-25 12:19:22.069133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:26.360 [2024-11-25 12:19:22.069136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:26.925 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:26.925 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:20:26.925 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:26.925 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:27.194 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:20:27.194 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:27.194 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:20:27.194 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.194 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:20:27.194 [2024-11-25 12:19:22.839353] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:27.194 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.194 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:20:27.194 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.194 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:20:27.194 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.194 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:20:27.194 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:27.194 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.194 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:20:27.194 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.194 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:27.194 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.194 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:20:27.194 12:19:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.194 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:27.194 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.194 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:20:27.194 [2024-11-25 12:19:22.892860] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:27.194 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.194 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:20:27.194 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:20:27.194 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:20:29.847 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:31.744 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:34.336 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:36.269 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:38.167 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:38.425 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:20:38.425 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:20:38.425 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:38.425 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:20:38.425 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:38.425 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:20:38.425 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:38.425 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:38.425 rmmod nvme_tcp 00:20:38.425 rmmod nvme_fabrics 00:20:38.425 rmmod nvme_keyring 00:20:38.425 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:38.425 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:20:38.425 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:20:38.425 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 72502 ']' 00:20:38.425 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 72502 00:20:38.425 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 72502 ']' 00:20:38.425 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 72502 00:20:38.425 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 
00:20:38.425 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:38.425 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72502 00:20:38.425 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:38.425 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:38.425 killing process with pid 72502 00:20:38.425 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72502' 00:20:38.425 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 72502 00:20:38.425 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 72502 00:20:38.425 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:38.425 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:38.425 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:38.425 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:20:38.425 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:38.425 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:20:38.425 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:20:38.425 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:38.425 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:38.425 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:38.682 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:38.682 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:38.682 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:38.682 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:38.682 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:38.682 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:38.682 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:38.682 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:38.682 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:38.682 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:38.682 12:19:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:38.682 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:38.682 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:38.682 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:38.682 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:38.682 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:38.682 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@300 -- # return 0 00:20:38.682 00:20:38.682 real 0m13.099s 00:20:38.682 user 0m48.429s 00:20:38.682 sys 0m1.406s 00:20:38.682 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:38.682 ************************************ 00:20:38.682 END TEST nvmf_connect_disconnect 00:20:38.682 ************************************ 00:20:38.682 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:20:38.682 12:19:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:20:38.682 12:19:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:38.682 12:19:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:38.682 12:19:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:38.682 ************************************ 00:20:38.682 START TEST nvmf_multitarget 00:20:38.682 ************************************ 00:20:38.682 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:20:38.940 * Looking for test storage... 
00:20:38.940 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:38.940 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:38.940 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:20:38.940 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:38.940 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:38.940 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:38.940 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:38.940 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:38.940 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:20:38.940 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:20:38.940 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:20:38.940 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:20:38.940 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:20:38.940 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:20:38.940 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:20:38.940 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:38.940 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:20:38.940 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:20:38.940 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:38.940 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:38.940 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:20:38.940 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:20:38.940 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:38.940 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:20:38.940 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:20:38.940 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:20:38.940 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:20:38.940 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:38.940 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:20:38.940 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:20:38.940 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:38.940 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:38.940 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:20:38.940 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:38.940 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:38.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.940 --rc genhtml_branch_coverage=1 00:20:38.940 --rc genhtml_function_coverage=1 00:20:38.940 --rc genhtml_legend=1 00:20:38.940 --rc geninfo_all_blocks=1 00:20:38.940 --rc geninfo_unexecuted_blocks=1 00:20:38.940 00:20:38.940 ' 00:20:38.940 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:38.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.940 --rc genhtml_branch_coverage=1 00:20:38.940 --rc genhtml_function_coverage=1 00:20:38.940 --rc genhtml_legend=1 00:20:38.940 --rc geninfo_all_blocks=1 00:20:38.940 --rc geninfo_unexecuted_blocks=1 00:20:38.940 00:20:38.940 ' 00:20:38.940 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:38.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.941 --rc genhtml_branch_coverage=1 00:20:38.941 --rc genhtml_function_coverage=1 00:20:38.941 --rc genhtml_legend=1 00:20:38.941 --rc geninfo_all_blocks=1 00:20:38.941 --rc geninfo_unexecuted_blocks=1 00:20:38.941 00:20:38.941 ' 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:38.941 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.941 --rc genhtml_branch_coverage=1 00:20:38.941 --rc genhtml_function_coverage=1 00:20:38.941 --rc genhtml_legend=1 00:20:38.941 --rc geninfo_all_blocks=1 00:20:38.941 --rc geninfo_unexecuted_blocks=1 00:20:38.941 00:20:38.941 ' 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@7 -- # uname -s 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:38.941 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@15 -- # nvmftestinit 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:38.941 Cannot find device "nvmf_init_br" 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # true 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:38.941 Cannot find device "nvmf_init_br2" 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # true 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:38.941 Cannot find device "nvmf_tgt_br" 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@164 -- # true 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:38.941 Cannot find device "nvmf_tgt_br2" 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@165 -- # true 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:38.941 Cannot find device "nvmf_init_br" 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@166 -- # true 00:20:38.941 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:38.941 Cannot find device "nvmf_init_br2" 00:20:38.942 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@167 -- # true 00:20:38.942 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:38.942 Cannot find device "nvmf_tgt_br" 00:20:38.942 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@168 -- # true 00:20:38.942 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:38.942 Cannot find device "nvmf_tgt_br2" 00:20:38.942 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # true 00:20:38.942 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:38.942 Cannot find device "nvmf_br" 00:20:38.942 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@170 -- # true 00:20:38.942 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:38.942 Cannot find device "nvmf_init_if" 00:20:38.942 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@171 -- # true 00:20:38.942 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:38.942 Cannot find device "nvmf_init_if2" 00:20:38.942 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@172 -- # true 00:20:38.942 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:38.942 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:38.942 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@173 -- # true 00:20:38.942 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:38.942 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:38.942 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@174 -- # true 00:20:38.942 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:38.942 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:39.200 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:39.200 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:39.200 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:39.200 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:39.200 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:39.200 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:39.200 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:39.200 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:39.200 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:39.200 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:39.200 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:39.200 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:39.200 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:39.200 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:39.200 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:39.200 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:39.200 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:39.200 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:39.200 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:39.200 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:39.200 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:39.200 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:20:39.200 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:39.200 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:39.200 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:39.200 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:39.200 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:39.200 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:39.200 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:39.200 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:39.200 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:39.200 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:39.200 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:20:39.200 00:20:39.200 --- 10.0.0.3 ping statistics --- 00:20:39.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:39.200 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:20:39.200 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:39.200 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:39.200 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:20:39.200 00:20:39.200 --- 10.0.0.4 ping statistics --- 00:20:39.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:39.200 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:20:39.200 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:39.200 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:39.200 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:20:39.200 00:20:39.200 --- 10.0.0.1 ping statistics --- 00:20:39.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:39.200 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:20:39.200 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:39.200 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:39.200 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.032 ms 00:20:39.200 00:20:39.200 --- 10.0.0.2 ping statistics --- 00:20:39.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:39.200 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:20:39.200 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:39.200 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@461 -- # return 0 00:20:39.200 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:39.200 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:39.200 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:39.200 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:39.200 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:39.200 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:39.200 12:19:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:39.200 12:19:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:20:39.200 12:19:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:39.200 12:19:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:39.200 12:19:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:20:39.200 12:19:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=72953 00:20:39.200 12:19:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 72953 00:20:39.200 12:19:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 72953 ']' 00:20:39.200 12:19:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:39.200 12:19:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:39.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:39.200 12:19:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:39.200 12:19:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:39.200 12:19:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:20:39.200 12:19:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:39.200 [2024-11-25 12:19:35.046553] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
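For reference, the "[: : integer expression expected" message captured above from test/nvmf/common.sh line 33 (and repeated later when rpc.sh sources the same file) comes from an integer test on an empty expansion. A minimal sketch of the failure mode, with a hypothetical flag name standing in for whichever variable is empty at that line:

    # [ '' -eq 1 ] cannot parse the empty string as an integer: the test
    # prints the error and returns status 2, which the surrounding `if`
    # simply treats as false, so the run continues as seen in this log.
    SOME_FLAG=''                                       # hypothetical stand-in
    if [ "$SOME_FLAG" -eq 1 ]; then echo set; fi       # reproduces the message
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then echo set; fi  # guarded variant, no error
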
00:20:39.200 [2024-11-25 12:19:35.046600] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:39.514 [2024-11-25 12:19:35.186978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:39.514 [2024-11-25 12:19:35.222909] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:39.514 [2024-11-25 12:19:35.222948] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:39.514 [2024-11-25 12:19:35.222954] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:39.514 [2024-11-25 12:19:35.222959] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:39.514 [2024-11-25 12:19:35.222964] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:39.514 [2024-11-25 12:19:35.223615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:39.514 [2024-11-25 12:19:35.224351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:39.514 [2024-11-25 12:19:35.224838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:39.514 [2024-11-25 12:19:35.224839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:40.079 12:19:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:40.079 12:19:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:20:40.079 12:19:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:40.079 12:19:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:40.079 12:19:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:20:40.337 12:19:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:40.337 12:19:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:20:40.337 12:19:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:20:40.337 12:19:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:20:40.337 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:20:40.337 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:20:40.337 "nvmf_tgt_1" 00:20:40.337 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:20:40.595 "nvmf_tgt_2" 00:20:40.595 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:20:40.595 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@28 -- # jq length 00:20:40.595 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:20:40.595 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:20:40.595 true 00:20:40.853 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:20:40.853 true 00:20:40.853 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:20:40.853 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:20:40.853 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:20:40.853 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:20:40.853 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:20:40.853 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:40.853 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:20:41.111 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:41.111 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:20:41.111 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:41.111 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:41.111 rmmod nvme_tcp 00:20:41.111 rmmod nvme_fabrics 00:20:41.111 rmmod nvme_keyring 00:20:41.111 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:41.111 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:20:41.111 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:20:41.111 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 72953 ']' 00:20:41.111 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 72953 00:20:41.111 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 72953 ']' 00:20:41.111 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 72953 00:20:41.111 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:20:41.111 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:41.111 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72953 00:20:41.111 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:41.111 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:41.111 killing process with pid 72953 00:20:41.111 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
72953' 00:20:41.111 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 72953 00:20:41.111 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 72953 00:20:41.111 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:41.111 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:41.111 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:41.111 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:20:41.111 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:20:41.111 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:41.111 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:20:41.111 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:41.111 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:41.111 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:41.111 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:41.111 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:41.111 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:41.369 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:41.369 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:41.369 12:19:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:41.369 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:41.369 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:41.369 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:41.369 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:41.369 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:41.369 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:41.369 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:41.369 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:41.369 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:41.369 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:41.369 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@300 -- # return 0 00:20:41.369 00:20:41.369 
real 0m2.618s 00:20:41.369 user 0m7.767s 00:20:41.369 sys 0m0.585s 00:20:41.369 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:41.369 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:20:41.369 ************************************ 00:20:41.369 END TEST nvmf_multitarget 00:20:41.369 ************************************ 00:20:41.369 12:19:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:20:41.369 12:19:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:41.369 12:19:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:41.369 12:19:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:41.369 ************************************ 00:20:41.369 START TEST nvmf_rpc 00:20:41.369 ************************************ 00:20:41.369 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:20:41.628 * Looking for test storage... 00:20:41.628 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:41.628 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:41.628 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:20:41.628 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:41.628 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:41.628 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:41.628 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:41.628 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:41.628 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:20:41.628 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:20:41.628 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:20:41.628 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:20:41.628 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:20:41.628 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:20:41.628 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:20:41.628 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:41.628 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:20:41.628 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:20:41.628 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:41.628 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:41.628 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:20:41.628 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:20:41.628 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:41.628 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:20:41.628 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:20:41.628 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:20:41.628 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:20:41.628 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:41.628 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:20:41.628 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:20:41.628 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:41.628 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:41.628 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:20:41.628 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:41.628 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:41.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.628 --rc genhtml_branch_coverage=1 00:20:41.628 --rc genhtml_function_coverage=1 00:20:41.628 --rc genhtml_legend=1 00:20:41.628 --rc geninfo_all_blocks=1 00:20:41.628 --rc geninfo_unexecuted_blocks=1 00:20:41.628 00:20:41.628 ' 00:20:41.628 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:41.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.628 --rc genhtml_branch_coverage=1 00:20:41.628 --rc genhtml_function_coverage=1 00:20:41.628 --rc genhtml_legend=1 00:20:41.628 --rc geninfo_all_blocks=1 00:20:41.628 --rc geninfo_unexecuted_blocks=1 00:20:41.628 00:20:41.628 ' 00:20:41.628 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:41.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.628 --rc genhtml_branch_coverage=1 00:20:41.628 --rc genhtml_function_coverage=1 00:20:41.628 --rc genhtml_legend=1 00:20:41.628 --rc geninfo_all_blocks=1 00:20:41.628 --rc geninfo_unexecuted_blocks=1 00:20:41.628 00:20:41.628 ' 00:20:41.628 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:41.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:41.628 --rc genhtml_branch_coverage=1 00:20:41.628 --rc genhtml_function_coverage=1 00:20:41.628 --rc genhtml_legend=1 00:20:41.628 --rc geninfo_all_blocks=1 00:20:41.628 --rc geninfo_unexecuted_blocks=1 00:20:41.628 00:20:41.628 ' 00:20:41.628 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:41.629 12:19:37 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:41.629 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:41.629 Cannot find device "nvmf_init_br" 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # true 00:20:41.629 12:19:37 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:41.629 Cannot find device "nvmf_init_br2" 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # true 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:41.629 Cannot find device "nvmf_tgt_br" 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@164 -- # true 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:41.629 Cannot find device "nvmf_tgt_br2" 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@165 -- # true 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:41.629 Cannot find device "nvmf_init_br" 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@166 -- # true 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:41.629 Cannot find device "nvmf_init_br2" 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@167 -- # true 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:41.629 Cannot find device "nvmf_tgt_br" 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@168 -- # true 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:41.629 Cannot find device "nvmf_tgt_br2" 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # true 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:41.629 Cannot find device "nvmf_br" 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@170 -- # true 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:41.629 Cannot find device "nvmf_init_if" 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@171 -- # true 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:41.629 Cannot find device "nvmf_init_if2" 00:20:41.629 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@172 -- # true 00:20:41.630 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:41.630 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:41.630 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@173 -- # true 00:20:41.630 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:41.630 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:41.630 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@174 -- # true 00:20:41.630 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:41.630 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:41.630 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name 
nvmf_init_br2 00:20:41.630 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:41.630 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:41.887 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:41.887 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:41.887 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:41.888 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:41.888 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:41.888 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:41.888 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:41.888 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:41.888 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:41.888 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:41.888 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:41.888 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:41.888 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:41.888 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:41.888 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:41.888 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:41.888 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:41.888 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:41.888 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:41.888 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:41.888 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:41.888 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:41.888 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:41.888 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:41.888 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:41.888 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:41.888 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:41.888 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:41.888 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:41.888 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:20:41.888 00:20:41.888 --- 10.0.0.3 ping statistics --- 00:20:41.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:41.888 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:20:41.888 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:41.888 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:41.888 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:20:41.888 00:20:41.888 --- 10.0.0.4 ping statistics --- 00:20:41.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:41.888 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:20:41.888 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:41.888 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:41.888 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms 00:20:41.888 00:20:41.888 --- 10.0.0.1 ping statistics --- 00:20:41.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:41.888 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms 00:20:41.888 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:41.888 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:41.888 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.041 ms 00:20:41.888 00:20:41.888 --- 10.0.0.2 ping statistics --- 00:20:41.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:41.888 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:20:41.888 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:41.888 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@461 -- # return 0 00:20:41.888 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:41.888 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:41.888 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:41.888 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:41.888 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:41.888 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:41.888 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:41.888 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:20:41.888 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:41.888 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:41.888 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:41.888 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=73236 00:20:41.888 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:41.888 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 73236 00:20:41.888 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 73236 ']' 00:20:41.888 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:41.888 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:41.888 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:41.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:41.888 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:41.888 12:19:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:41.888 [2024-11-25 12:19:37.697647] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
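[Annotation] Everything traced up to this point is the standard nvmf/common.sh network fixture: veth pairs are created for the initiator and target sides, the target-side ends are moved into the nvmf_tgt_ns_spdk network namespace, the host-side peers are enslaved to the nvmf_br bridge, 10.0.0.1-10.0.0.4/24 are assigned, iptables ACCEPT rules are inserted for TCP port 4420, connectivity is verified with single pings in both directions, and finally nvmf_tgt is launched inside the namespace (the NVMF_APP prefix at common.sh@227 is what makes nvmfappstart run it under `ip netns exec`). A condensed, illustrative sketch of that topology follows; it collapses the two pairs per side down to one, assumes root, and includes the `ip netns add` step that ran before this excerpt:

ip netns add nvmf_tgt_ns_spdk                      # namespace the target runs in
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk     # target end lives in the netns
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge                    # bridge ties both sides together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3                                 # host -> namespaced target end
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1  # netns -> host end, over the bridge
ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -m 0xF &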
00:20:41.888 [2024-11-25 12:19:37.697699] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:42.147 [2024-11-25 12:19:37.833829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:42.147 [2024-11-25 12:19:37.869291] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:42.147 [2024-11-25 12:19:37.869324] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:42.147 [2024-11-25 12:19:37.869331] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:42.147 [2024-11-25 12:19:37.869336] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:42.147 [2024-11-25 12:19:37.869340] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:42.147 [2024-11-25 12:19:37.870401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:42.147 [2024-11-25 12:19:37.870722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:42.147 [2024-11-25 12:19:37.870796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:42.147 [2024-11-25 12:19:37.870799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:42.714 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:42.714 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:20:42.714 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:42.714 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:42.714 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:42.714 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:42.714 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:20:42.714 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.714 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:42.714 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.714 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:20:42.714 "poll_groups": [ 00:20:42.714 { 00:20:42.714 "admin_qpairs": 0, 00:20:42.714 "completed_nvme_io": 0, 00:20:42.714 "current_admin_qpairs": 0, 00:20:42.714 "current_io_qpairs": 0, 00:20:42.714 "io_qpairs": 0, 00:20:42.714 "name": "nvmf_tgt_poll_group_000", 00:20:42.714 "pending_bdev_io": 0, 00:20:42.714 "transports": [] 00:20:42.714 }, 00:20:42.714 { 00:20:42.714 "admin_qpairs": 0, 00:20:42.714 "completed_nvme_io": 0, 00:20:42.714 "current_admin_qpairs": 0, 00:20:42.714 "current_io_qpairs": 0, 00:20:42.714 "io_qpairs": 0, 00:20:42.714 "name": "nvmf_tgt_poll_group_001", 00:20:42.714 "pending_bdev_io": 0, 00:20:42.714 "transports": [] 00:20:42.714 }, 00:20:42.714 { 00:20:42.714 "admin_qpairs": 0, 00:20:42.714 "completed_nvme_io": 0, 00:20:42.714 "current_admin_qpairs": 0, 00:20:42.714 "current_io_qpairs": 0, 
00:20:42.714 "io_qpairs": 0, 00:20:42.714 "name": "nvmf_tgt_poll_group_002", 00:20:42.714 "pending_bdev_io": 0, 00:20:42.714 "transports": [] 00:20:42.714 }, 00:20:42.714 { 00:20:42.714 "admin_qpairs": 0, 00:20:42.714 "completed_nvme_io": 0, 00:20:42.714 "current_admin_qpairs": 0, 00:20:42.714 "current_io_qpairs": 0, 00:20:42.714 "io_qpairs": 0, 00:20:42.714 "name": "nvmf_tgt_poll_group_003", 00:20:42.714 "pending_bdev_io": 0, 00:20:42.714 "transports": [] 00:20:42.714 } 00:20:42.714 ], 00:20:42.714 "tick_rate": 2600000000 00:20:42.714 }' 00:20:42.714 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:20:42.714 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:20:42.714 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:20:42.714 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:20:42.973 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:20:42.973 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:20:42.973 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:20:42.973 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:42.973 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.973 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:42.973 [2024-11-25 12:19:38.642883] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:42.973 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.973 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:20:42.973 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.973 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:42.973 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.973 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:20:42.973 "poll_groups": [ 00:20:42.973 { 00:20:42.973 "admin_qpairs": 0, 00:20:42.973 "completed_nvme_io": 0, 00:20:42.973 "current_admin_qpairs": 0, 00:20:42.973 "current_io_qpairs": 0, 00:20:42.973 "io_qpairs": 0, 00:20:42.973 "name": "nvmf_tgt_poll_group_000", 00:20:42.973 "pending_bdev_io": 0, 00:20:42.973 "transports": [ 00:20:42.973 { 00:20:42.973 "trtype": "TCP" 00:20:42.973 } 00:20:42.973 ] 00:20:42.973 }, 00:20:42.973 { 00:20:42.973 "admin_qpairs": 0, 00:20:42.973 "completed_nvme_io": 0, 00:20:42.973 "current_admin_qpairs": 0, 00:20:42.973 "current_io_qpairs": 0, 00:20:42.973 "io_qpairs": 0, 00:20:42.973 "name": "nvmf_tgt_poll_group_001", 00:20:42.973 "pending_bdev_io": 0, 00:20:42.973 "transports": [ 00:20:42.973 { 00:20:42.973 "trtype": "TCP" 00:20:42.973 } 00:20:42.973 ] 00:20:42.973 }, 00:20:42.973 { 00:20:42.973 "admin_qpairs": 0, 00:20:42.973 "completed_nvme_io": 0, 00:20:42.973 "current_admin_qpairs": 0, 00:20:42.973 "current_io_qpairs": 0, 00:20:42.973 "io_qpairs": 0, 00:20:42.973 "name": "nvmf_tgt_poll_group_002", 00:20:42.973 "pending_bdev_io": 0, 00:20:42.973 "transports": [ 00:20:42.973 { 00:20:42.973 "trtype": "TCP" 00:20:42.973 } 
00:20:42.973 ] 00:20:42.973 }, 00:20:42.973 { 00:20:42.973 "admin_qpairs": 0, 00:20:42.973 "completed_nvme_io": 0, 00:20:42.973 "current_admin_qpairs": 0, 00:20:42.973 "current_io_qpairs": 0, 00:20:42.973 "io_qpairs": 0, 00:20:42.973 "name": "nvmf_tgt_poll_group_003", 00:20:42.973 "pending_bdev_io": 0, 00:20:42.973 "transports": [ 00:20:42.973 { 00:20:42.973 "trtype": "TCP" 00:20:42.973 } 00:20:42.973 ] 00:20:42.973 } 00:20:42.973 ], 00:20:42.973 "tick_rate": 2600000000 00:20:42.973 }' 00:20:42.973 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:20:42.973 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:20:42.973 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:20:42.973 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:20:42.973 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:20:42.973 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:20:42.973 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:20:42.973 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:20:42.973 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:20:42.973 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:20:42.973 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:20:42.973 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:20:42.973 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:20:42.973 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:42.973 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.973 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:42.973 Malloc1 00:20:42.973 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.973 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:42.973 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.973 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:42.973 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.973 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:42.973 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.973 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:42.973 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.973 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:20:42.973 12:19:38 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.973 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:42.973 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.973 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:42.973 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.973 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:42.973 [2024-11-25 12:19:38.794132] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:42.973 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.973 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid=5934b511-67a7-4c73-84f0-52fdf36bb79d -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -a 10.0.0.3 -s 4420 00:20:42.973 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:20:42.974 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid=5934b511-67a7-4c73-84f0-52fdf36bb79d -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -a 10.0.0.3 -s 4420 00:20:42.974 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:20:42.974 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:42.974 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:20:42.974 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:42.974 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:20:42.974 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:42.974 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:20:42.974 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:20:42.974 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid=5934b511-67a7-4c73-84f0-52fdf36bb79d -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -a 10.0.0.3 -s 4420 00:20:42.974 [2024-11-25 12:19:38.816385] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d' 00:20:42.974 Failed to write to /dev/nvme-fabrics: Input/output error 00:20:42.974 could not add new controller: failed to write to nvme-fabrics device 00:20:42.974 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 
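[Annotation] The Input/output error above is the expected result of the rpc.sh@58 NOT-assertion: allow_any_host was disabled at rpc.sh@54 and the host NQN had not yet been whitelisted, so the target rejects the connect with "does not allow host" and the test requires that failure. Only after rpc.sh@61 adds the host NQN does the identical connect at rpc.sh@62 succeed. The same access-control flow, sketched directly against SPDK's scripts/rpc.py instead of the autotest rpc_cmd wrapper (HOSTNQN and its source file are placeholders; the run above uses the VM's uuid-based NQN):

HOSTNQN=$(cat /etc/nvme/hostnqn)   # illustrative; any valid host NQN works
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
# with no hosts whitelisted, this connect is expected to fail with "does not allow host"
nvme connect --hostnqn="$HOSTNQN" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 && exit 1
# whitelist the host, then the identical connect succeeds
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"
nvme connect --hostnqn="$HOSTNQN" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420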
00:20:42.974 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:42.974 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:42.974 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:42.974 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:20:42.974 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.974 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:42.974 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.974 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid=5934b511-67a7-4c73-84f0-52fdf36bb79d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:20:43.231 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:20:43.231 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:20:43.231 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:20:43.231 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:20:43.231 12:19:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:20:45.182 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:20:45.182 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:20:45.182 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:20:45.182 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:20:45.182 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:20:45.182 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:20:45.182 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:45.442 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:45.442 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:45.442 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:20:45.442 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:20:45.442 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:45.442 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:20:45.442 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:45.442 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:20:45.442 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:20:45.443 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.443 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:45.443 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.443 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid=5934b511-67a7-4c73-84f0-52fdf36bb79d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:20:45.443 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:20:45.443 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid=5934b511-67a7-4c73-84f0-52fdf36bb79d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:20:45.443 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:20:45.443 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:45.443 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:20:45.443 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:45.443 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:20:45.443 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:45.443 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:20:45.443 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:20:45.443 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid=5934b511-67a7-4c73-84f0-52fdf36bb79d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:20:45.443 [2024-11-25 12:19:41.087298] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d' 00:20:45.443 Failed to write to /dev/nvme-fabrics: Input/output error 00:20:45.443 could not add new controller: failed to write to nvme-fabrics device 00:20:45.443 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:20:45.443 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:45.443 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:45.443 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:45.443 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:20:45.443 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.443 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 
-- # set +x 00:20:45.443 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.443 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid=5934b511-67a7-4c73-84f0-52fdf36bb79d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:20:45.443 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:20:45.443 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:20:45.443 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:20:45.443 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:20:45.443 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:20:47.968 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:20:47.968 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:20:47.968 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:20:47.968 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:20:47.968 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:20:47.968 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:20:47.968 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:47.968 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:47.968 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:47.968 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:20:47.968 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:20:47.968 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:47.968 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:20:47.968 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:47.968 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:20:47.968 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:47.968 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.968 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:47.968 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.969 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:20:47.969 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:20:47.969 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:20:47.969 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.969 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:47.969 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.969 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:47.969 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.969 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:47.969 [2024-11-25 12:19:43.343620] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:47.969 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.969 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:20:47.969 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.969 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:47.969 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.969 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:20:47.969 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.969 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:47.969 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.969 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid=5934b511-67a7-4c73-84f0-52fdf36bb79d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:20:47.969 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:20:47.969 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:20:47.969 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:20:47.969 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:20:47.969 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:20:49.867 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:20:49.867 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:20:49.867 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:20:49.867 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:20:49.867 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:20:49.867 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:20:49.867 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:49.867 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:49.867 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:49.867 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:20:49.867 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:20:49.868 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:49.868 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:49.868 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:20:49.868 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:20:49.868 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:20:49.868 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.868 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:49.868 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.868 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:49.868 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.868 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:49.868 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.868 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:20:49.868 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:20:49.868 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.868 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:49.868 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.868 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:49.868 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.868 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:49.868 [2024-11-25 12:19:45.726002] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:49.868 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.868 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:20:49.868 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.868 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:49.868 12:19:45 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.868 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:20:49.868 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.868 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:50.125 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.125 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid=5934b511-67a7-4c73-84f0-52fdf36bb79d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:20:50.125 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:20:50.125 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:20:50.125 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:20:50.125 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:20:50.125 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:20:52.651 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:20:52.652 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:20:52.652 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:20:52.652 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:20:52.652 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:20:52.652 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:20:52.652 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:52.652 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:52.652 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:52.652 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:20:52.652 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:20:52.652 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:52.652 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:52.652 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:20:52.652 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:20:52.652 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:20:52.652 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.652 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:52.652 12:19:48 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.652 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:52.652 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.652 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:52.652 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.652 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:20:52.652 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:20:52.652 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.652 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:52.652 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.652 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:52.652 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.652 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:52.652 [2024-11-25 12:19:48.116873] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:52.652 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.652 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:20:52.652 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.652 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:52.652 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.652 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:20:52.652 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.652 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:52.652 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.652 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid=5934b511-67a7-4c73-84f0-52fdf36bb79d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:20:52.652 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:20:52.652 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:20:52.652 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:20:52.652 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:20:52.652 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1209 -- # sleep 2 00:20:54.549 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:20:54.549 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:20:54.549 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:20:54.549 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:20:54.549 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:20:54.549 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:20:54.549 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:54.808 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:54.808 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:54.808 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:20:54.808 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:20:54.808 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:54.808 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:20:54.808 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:54.808 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:20:54.808 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:20:54.808 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.808 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:54.808 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.808 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:54.808 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.808 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:54.808 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.808 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:20:54.808 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:20:54.808 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.808 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:54.808 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.808 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:54.808 12:19:50 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.808 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:54.808 [2024-11-25 12:19:50.511605] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:54.808 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.808 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:20:54.808 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.808 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:54.808 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.808 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:20:54.808 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.808 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:54.808 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.808 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid=5934b511-67a7-4c73-84f0-52fdf36bb79d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:20:55.066 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:20:55.066 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:20:55.066 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:20:55.066 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:20:55.066 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:20:56.966 12:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:20:56.966 12:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:20:56.966 12:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:20:56.966 12:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:20:56.966 12:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:20:56.966 12:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:20:56.966 12:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:56.966 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:56.966 12:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:56.966 12:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:20:56.966 12:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:56.966 12:19:52 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:20:56.966 12:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:20:56.966 12:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:56.966 12:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:20:56.966 12:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:20:56.966 12:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.966 12:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:56.966 12:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.966 12:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:56.966 12:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.966 12:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:56.966 12:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.966 12:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:20:56.966 12:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:20:56.966 12:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.966 12:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:56.966 12:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.966 12:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:56.966 12:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.966 12:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:56.966 [2024-11-25 12:19:52.798806] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:56.966 12:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.966 12:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:20:56.966 12:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.966 12:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:56.966 12:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.966 12:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:20:56.966 12:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.966 12:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:56.966 12:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
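[Annotation] The two polling helpers traced repeatedly in this section bracket every connect/disconnect: after `nvme connect`, waitforserial sleeps and counts lsblk rows whose SERIAL column matches until the expected device appears; waitforserial_disconnect inverts the check after `nvme disconnect`. A simplified sketch of the pattern visible in the xtrace above — the real helpers live in autotest_common.sh and also accept an expected device count:

waitforserial() {
    local serial=$1 i=0
    while (( i++ <= 15 )); do          # bounded retry, as in the trace
        sleep 2
        # the block device shows up once the controller's namespaces enumerate
        (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
    done
    return 1                           # timed out waiting for the serial
}
waitforserial SPDKISFASTANDAWESOME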
00:20:56.966 12:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid=5934b511-67a7-4c73-84f0-52fdf36bb79d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:20:57.224 12:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:20:57.224 12:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:20:57.224 12:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:20:57.224 12:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:20:57.224 12:19:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:20:59.144 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:20:59.144 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:20:59.144 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:20:59.144 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:20:59.144 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:20:59.144 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:20:59.144 12:19:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:59.402 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
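[Annotation] That remove_ns/delete_subsystem pair closes another pass of the first stress loop. Interleaved across the trace above, target/rpc.sh@81-94 runs the following cycle five times; reassembled here for readability (rpc_cmd is the autotest wrapper around scripts/rpc.py, and the --hostnqn/--hostid arguments from this run are elided):

for i in $(seq 1 5); do
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5   # attach bdev as nsid 5
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
    waitforserial SPDKISFASTANDAWESOME              # block device must appear...
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    waitforserial_disconnect SPDKISFASTANDAWESOME   # ...and disappear again
    rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
done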
00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:59.402 [2024-11-25 12:19:55.101206] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 
-- # xtrace_disable 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:59.402 [2024-11-25 12:19:55.149240] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:59.402 12:19:55 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:59.402 [2024-11-25 12:19:55.197278] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:59.402 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.403 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:59.403 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.403 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:59.403 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.403 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:20:59.403 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.403 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:59.403 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.403 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:59.403 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.403 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:59.403 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.403 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:59.403 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.403 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:59.403 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.403 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:20:59.403 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:20:59.403 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.403 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:59.403 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.403 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:59.403 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.403 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:59.403 [2024-11-25 12:19:55.245319] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:59.403 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.403 
12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:59.403 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.403 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:59.403 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.403 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:20:59.403 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.403 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:59.403 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.403 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:59.403 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.403 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:59.403 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.403 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:59.403 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.403 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:59.660 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.660 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:20:59.660 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:20:59.660 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.660 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:59.660 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.660 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:59.660 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.660 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:59.661 [2024-11-25 12:19:55.293390] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:59.661 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.661 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:59.661 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.661 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:59.661 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:20:59.661 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:20:59.661 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.661 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:59.661 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.661 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:59.661 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.661 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:59.661 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.661 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:59.661 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.661 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:59.661 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.661 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:20:59.661 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.661 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:59.661 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.661 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:20:59.661 "poll_groups": [ 00:20:59.661 { 00:20:59.661 "admin_qpairs": 2, 00:20:59.661 "completed_nvme_io": 116, 00:20:59.661 "current_admin_qpairs": 0, 00:20:59.661 "current_io_qpairs": 0, 00:20:59.661 "io_qpairs": 16, 00:20:59.661 "name": "nvmf_tgt_poll_group_000", 00:20:59.661 "pending_bdev_io": 0, 00:20:59.661 "transports": [ 00:20:59.661 { 00:20:59.661 "trtype": "TCP" 00:20:59.661 } 00:20:59.661 ] 00:20:59.661 }, 00:20:59.661 { 00:20:59.661 "admin_qpairs": 3, 00:20:59.661 "completed_nvme_io": 165, 00:20:59.661 "current_admin_qpairs": 0, 00:20:59.661 "current_io_qpairs": 0, 00:20:59.661 "io_qpairs": 17, 00:20:59.661 "name": "nvmf_tgt_poll_group_001", 00:20:59.661 "pending_bdev_io": 0, 00:20:59.661 "transports": [ 00:20:59.661 { 00:20:59.661 "trtype": "TCP" 00:20:59.661 } 00:20:59.661 ] 00:20:59.661 }, 00:20:59.661 { 00:20:59.661 "admin_qpairs": 1, 00:20:59.661 "completed_nvme_io": 69, 00:20:59.661 "current_admin_qpairs": 0, 00:20:59.661 "current_io_qpairs": 0, 00:20:59.661 "io_qpairs": 19, 00:20:59.661 "name": "nvmf_tgt_poll_group_002", 00:20:59.661 "pending_bdev_io": 0, 00:20:59.661 "transports": [ 00:20:59.661 { 00:20:59.661 "trtype": "TCP" 00:20:59.661 } 00:20:59.661 ] 00:20:59.661 }, 00:20:59.661 { 00:20:59.661 "admin_qpairs": 1, 00:20:59.661 "completed_nvme_io": 70, 00:20:59.661 "current_admin_qpairs": 0, 00:20:59.661 "current_io_qpairs": 0, 00:20:59.661 "io_qpairs": 18, 00:20:59.661 "name": "nvmf_tgt_poll_group_003", 00:20:59.661 "pending_bdev_io": 0, 00:20:59.661 "transports": [ 00:20:59.661 { 00:20:59.661 "trtype": "TCP" 00:20:59.661 } 00:20:59.661 ] 00:20:59.661 } 00:20:59.661 ], 
00:20:59.661 "tick_rate": 2600000000 00:20:59.661 }' 00:20:59.661 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:20:59.661 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:20:59.661 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:20:59.661 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:20:59.661 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:20:59.661 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:20:59.661 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:20:59.661 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:20:59.661 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:20:59.661 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:20:59.661 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:20:59.661 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:20:59.661 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:20:59.661 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:59.661 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:20:59.661 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:59.661 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:20:59.661 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:59.661 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:59.661 rmmod nvme_tcp 00:20:59.661 rmmod nvme_fabrics 00:20:59.661 rmmod nvme_keyring 00:20:59.661 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:59.661 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:20:59.661 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:20:59.661 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 73236 ']' 00:20:59.661 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 73236 00:20:59.661 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 73236 ']' 00:20:59.661 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 73236 00:20:59.661 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:20:59.661 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:59.661 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73236 00:20:59.919 killing process with pid 73236 00:20:59.919 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:59.919 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:59.919 12:19:55 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73236' 00:20:59.919 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 73236 00:20:59.919 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 73236 00:20:59.919 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:59.919 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:59.919 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:59.919 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:20:59.919 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:20:59.919 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:59.919 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:20:59.919 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:59.919 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:59.919 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:59.919 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:59.919 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:59.919 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:59.919 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:59.919 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:59.919 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:59.919 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:59.919 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:00.177 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:00.177 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:00.177 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:00.177 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:00.177 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:00.177 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:00.177 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:00.177 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:00.177 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@300 -- # return 0 00:21:00.177 00:21:00.177 real 0m18.716s 00:21:00.177 user 1m10.516s 00:21:00.177 sys 0m1.814s 00:21:00.177 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:00.177 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:00.177 ************************************ 00:21:00.177 END TEST nvmf_rpc 00:21:00.177 ************************************ 00:21:00.177 12:19:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:21:00.177 12:19:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:00.177 12:19:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:00.177 12:19:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:00.177 ************************************ 00:21:00.177 START TEST nvmf_invalid 00:21:00.177 ************************************ 00:21:00.177 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:21:00.177 * Looking for test storage... 00:21:00.177 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:00.177 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:00.177 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:21:00.177 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:00.436 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:00.436 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:00.436 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:00.436 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:00.436 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:21:00.436 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:21:00.436 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:21:00.436 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:21:00.436 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:21:00.436 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:21:00.436 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:21:00.436 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:00.436 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:21:00.436 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:21:00.436 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:00.436 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:00.436 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:21:00.436 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:21:00.436 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:00.436 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:21:00.436 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:21:00.436 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:21:00.436 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:21:00.436 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:00.436 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:21:00.436 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:21:00.436 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:00.436 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:00.436 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:21:00.436 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:00.436 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:00.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.436 --rc genhtml_branch_coverage=1 00:21:00.436 --rc genhtml_function_coverage=1 00:21:00.436 --rc genhtml_legend=1 00:21:00.436 --rc geninfo_all_blocks=1 00:21:00.436 --rc geninfo_unexecuted_blocks=1 00:21:00.436 00:21:00.436 ' 00:21:00.436 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:00.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.436 --rc genhtml_branch_coverage=1 00:21:00.436 --rc genhtml_function_coverage=1 00:21:00.436 --rc genhtml_legend=1 00:21:00.436 --rc geninfo_all_blocks=1 00:21:00.436 --rc geninfo_unexecuted_blocks=1 00:21:00.436 00:21:00.436 ' 00:21:00.436 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:00.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.436 --rc genhtml_branch_coverage=1 00:21:00.436 --rc genhtml_function_coverage=1 00:21:00.436 --rc genhtml_legend=1 00:21:00.436 --rc geninfo_all_blocks=1 00:21:00.436 --rc geninfo_unexecuted_blocks=1 00:21:00.436 00:21:00.436 ' 00:21:00.436 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:00.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.436 --rc genhtml_branch_coverage=1 00:21:00.436 --rc genhtml_function_coverage=1 00:21:00.436 --rc genhtml_legend=1 00:21:00.436 --rc geninfo_all_blocks=1 00:21:00.436 --rc geninfo_unexecuted_blocks=1 00:21:00.436 00:21:00.436 ' 00:21:00.436 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:00.436 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:21:00.436 12:19:56 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:00.436 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:00.437 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # 
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
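[editorial sketch] At this point nvmftestinit has sourced nvmf/common.sh and, because NET_TYPE=virt, drops into nvmf_veth_init. That helper begins by tearing down any topology left over from a previous run, so the burst of "Cannot find device ..." errors below is expected on a fresh VM: each cleanup command in the trace is immediately followed by a "# true" guard. A minimal sketch of that idempotent-cleanup pattern (an illustration of the guard visible in the trace, not a copy of nvmf/common.sh; stderr is deliberately left unredirected, which is why the errors show up in the log):

    # Best-effort teardown: every step may fail if the device does not
    # exist yet; "|| true" swallows the failure so set -e keeps going.
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster || true
        ip link set "$dev" down || true
    done
    ip link delete nvmf_br type bridge || true
    ip link delete nvmf_init_if || true
    ip link delete nvmf_init_if2 || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true

Only after this cleanup does the helper create the nvmf_tgt_ns_spdk namespace, the veth pairs, the addresses, and the nvmf_br bridge seen further down.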
00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:00.437 Cannot find device "nvmf_init_br" 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # true 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:00.437 Cannot find device "nvmf_init_br2" 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # true 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:00.437 Cannot find device "nvmf_tgt_br" 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@164 -- # true 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:00.437 Cannot find device "nvmf_tgt_br2" 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@165 -- # true 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:00.437 Cannot find device "nvmf_init_br" 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@166 -- # true 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:00.437 Cannot find device "nvmf_init_br2" 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@167 -- # true 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:00.437 Cannot find device "nvmf_tgt_br" 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@168 -- # true 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:00.437 Cannot find device "nvmf_tgt_br2" 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # true 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:00.437 Cannot find device "nvmf_br" 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@170 -- # true 00:21:00.437 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:00.437 Cannot find device "nvmf_init_if" 00:21:00.438 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@171 -- # true 00:21:00.438 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:00.438 Cannot find device "nvmf_init_if2" 00:21:00.438 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@172 -- # true 00:21:00.438 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:00.438 Cannot open network namespace 
"nvmf_tgt_ns_spdk": No such file or directory 00:21:00.438 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@173 -- # true 00:21:00.438 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:00.438 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:00.438 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@174 -- # true 00:21:00.438 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:00.438 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:00.438 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:00.438 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:00.438 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:00.438 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:00.438 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:00.438 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:00.438 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:00.438 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:00.438 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:00.438 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:00.695 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:00.695 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:00.695 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:00.695 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:00.696 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:00.696 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:00.696 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:00.696 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:00.696 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:00.696 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:00.696 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:00.696 12:19:56 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:00.696 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:00.696 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:00.696 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:00.696 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:00.696 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:00.696 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:00.696 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:00.696 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:00.696 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:00.696 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:00.696 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:21:00.696 00:21:00.696 --- 10.0.0.3 ping statistics --- 00:21:00.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:00.696 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:21:00.696 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:00.696 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:00.696 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.071 ms 00:21:00.696 00:21:00.696 --- 10.0.0.4 ping statistics --- 00:21:00.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:00.696 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:21:00.696 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:00.696 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:00.696 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:21:00.696 00:21:00.696 --- 10.0.0.1 ping statistics --- 00:21:00.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:00.696 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:21:00.696 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:00.696 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:00.696 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:21:00.696 00:21:00.696 --- 10.0.0.2 ping statistics --- 00:21:00.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:00.696 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:21:00.696 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:00.696 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@461 -- # return 0 00:21:00.696 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:00.696 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:00.696 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:00.696 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:00.696 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:00.696 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:00.696 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:00.696 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:21:00.696 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:00.696 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:00.696 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:21:00.696 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:00.696 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=73793 00:21:00.696 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 73793 00:21:00.696 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 73793 ']' 00:21:00.696 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:00.696 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:00.696 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:00.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:00.696 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:00.696 12:19:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:21:00.696 [2024-11-25 12:19:56.454504] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
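[editorial sketch] nvmfappstart launches the target inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF), records its PID in nvmfpid (73793 here), and then blocks in waitforlisten until the app is servicing its RPC socket; the max_retries=100 and the "Waiting for process to start up..." banner above come from that helper. A simplified sketch of the gating loop (an approximation only; the real autotest_common.sh helper does more, e.g. actually probing the socket rather than merely testing that it exists):

    wait_for_rpc() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < 100; i++)); do            # max_retries=100 in the trace
            kill -0 "$pid" 2>/dev/null || return 1 # target died during startup
            [[ -S $rpc_addr ]] && return 0         # RPC socket has appeared
            sleep 0.1
        done
        return 1
    }

Once the wait returns, every subsequent rpc_cmd in the test talks to pid 73793 over that UNIX socket.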
00:21:00.696 [2024-11-25 12:19:56.454557] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:00.954 [2024-11-25 12:19:56.595182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:00.954 [2024-11-25 12:19:56.627696] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:00.954 [2024-11-25 12:19:56.627732] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:00.954 [2024-11-25 12:19:56.627739] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:00.954 [2024-11-25 12:19:56.627745] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:00.954 [2024-11-25 12:19:56.627759] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:00.954 [2024-11-25 12:19:56.628373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:00.954 [2024-11-25 12:19:56.628454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:00.954 [2024-11-25 12:19:56.629018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:00.954 [2024-11-25 12:19:56.629039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.522 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:01.522 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:21:01.522 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:01.522 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:01.522 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:21:01.522 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:01.522 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:21:01.522 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode15597 00:21:01.781 [2024-11-25 12:19:57.544259] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:21:01.781 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='2024/11/25 12:19:57 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode15597 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:21:01.781 request: 00:21:01.781 { 00:21:01.781 "method": "nvmf_create_subsystem", 00:21:01.781 "params": { 00:21:01.781 "nqn": "nqn.2016-06.io.spdk:cnode15597", 00:21:01.781 "tgt_name": "foobar" 00:21:01.781 } 00:21:01.781 } 00:21:01.781 Got JSON-RPC error response 00:21:01.781 GoRPCClient: error on JSON-RPC call' 00:21:01.781 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ 2024/11/25 12:19:57 error on JSON-RPC call, method: nvmf_create_subsystem, 
params: map[nqn:nqn.2016-06.io.spdk:cnode15597 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:21:01.781 request: 00:21:01.781 { 00:21:01.781 "method": "nvmf_create_subsystem", 00:21:01.781 "params": { 00:21:01.781 "nqn": "nqn.2016-06.io.spdk:cnode15597", 00:21:01.781 "tgt_name": "foobar" 00:21:01.781 } 00:21:01.781 } 00:21:01.781 Got JSON-RPC error response 00:21:01.781 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:21:01.781 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:21:01.781 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode13103 00:21:02.040 [2024-11-25 12:19:57.728419] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13103: invalid serial number 'SPDKISFASTANDAWESOME' 00:21:02.040 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='2024/11/25 12:19:57 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode13103 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:21:02.040 request: 00:21:02.040 { 00:21:02.040 "method": "nvmf_create_subsystem", 00:21:02.040 "params": { 00:21:02.040 "nqn": "nqn.2016-06.io.spdk:cnode13103", 00:21:02.040 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:21:02.040 } 00:21:02.040 } 00:21:02.040 Got JSON-RPC error response 00:21:02.040 GoRPCClient: error on JSON-RPC call' 00:21:02.040 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ 2024/11/25 12:19:57 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode13103 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:21:02.040 request: 00:21:02.040 { 00:21:02.040 "method": "nvmf_create_subsystem", 00:21:02.040 "params": { 00:21:02.040 "nqn": "nqn.2016-06.io.spdk:cnode13103", 00:21:02.040 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:21:02.040 } 00:21:02.040 } 00:21:02.040 Got JSON-RPC error response 00:21:02.040 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:21:02.040 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:21:02.040 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode11457 00:21:02.299 [2024-11-25 12:19:57.940583] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11457: invalid model number 'SPDK_Controller' 00:21:02.299 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='2024/11/25 12:19:57 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode11457], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:21:02.299 request: 00:21:02.299 { 00:21:02.299 "method": "nvmf_create_subsystem", 00:21:02.299 "params": { 00:21:02.299 "nqn": "nqn.2016-06.io.spdk:cnode11457", 00:21:02.299 "model_number": "SPDK_Controller\u001f" 
00:21:02.299 } 00:21:02.299 } 00:21:02.299 Got JSON-RPC error response 00:21:02.299 GoRPCClient: error on JSON-RPC call' 00:21:02.299 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ 2024/11/25 12:19:57 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode11457], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:21:02.299 request: 00:21:02.299 { 00:21:02.299 "method": "nvmf_create_subsystem", 00:21:02.299 "params": { 00:21:02.299 "nqn": "nqn.2016-06.io.spdk:cnode11457", 00:21:02.299 "model_number": "SPDK_Controller\u001f" 00:21:02.299 } 00:21:02.299 } 00:21:02.299 Got JSON-RPC error response 00:21:02.299 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:21:02.299 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:21:02.299 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:21:02.299 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:21:02.299 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:21:02.299 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:21:02.299 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:21:02.299 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:02.299 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:21:02.299 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:21:02.299 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:21:02.299 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:02.299 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:02.299 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:21:02.299 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:21:02.299 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:21:02.299 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:02.299 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:02.299 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:21:02.299 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:21:02.299 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:21:02.299 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:02.299 
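The xtrace above and below is target/invalid.sh's gen_random_s helper assembling a random serial number one character at a time: for each position it prints a code point from the 32..127 chars array with printf %x, renders it with echo -e '\xNN', and appends the result to $string. Pieced together from the trace alone (not from the script's source), the helper behaves roughly like this sketch; the real implementation evidently special-cases characters such as the space and the single quote, which show up in the trace as literal string+=' ' and string+=\' appends:

# Condensed sketch of gen_random_s as inferred from the xtrace; the real
# helper in target/invalid.sh may differ in detail.
gen_random_s() {
  local length=$1 ll string=
  for (( ll = 0; ll < length; ll++ )); do
    # draw a printable code point (decimal 32..127, as in the chars array),
    # format it as hex with printf, render it with echo -e, and append it
    local code=$(( RANDOM % 96 + 32 ))
    string+=$(echo -en "$(printf '\\x%x' "$code")")
  done
  echo "$string"
}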
[... 18 further gen_random_s iterations trimmed: each one prints the next code point with 'printf %x NN', renders it via "echo -e '\xNN'", and appends it to $string, extending the 'hi0' built above to the full 21-character serial echoed below ...]
00:21:02.300 12:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:21:02.300 12:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ h == \- ]]
00:21:02.300 12:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'hi0#]p k?drQ~'\''q\M/p'\''M'
00:21:02.300 12:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s 'hi0#]p k?drQ~'\''q\M/p'\''M' nqn.2016-06.io.spdk:cnode23063
00:21:02.561 [2024-11-25 12:19:58.224795] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23063: invalid serial number 'hi0#]p
k?drQ~'q\M/p'M' 00:21:02.561 12:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='2024/11/25 12:19:58 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode23063 serial_number:hi0#]p k?drQ~'\''q\M/p'\''M], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN hi0#]p k?drQ~'\''q\M/p'\''M 00:21:02.561 request: 00:21:02.561 { 00:21:02.561 "method": "nvmf_create_subsystem", 00:21:02.561 "params": { 00:21:02.561 "nqn": "nqn.2016-06.io.spdk:cnode23063", 00:21:02.561 "serial_number": "hi0#]p k?drQ~'\''q\\M/p'\''M" 00:21:02.561 } 00:21:02.561 } 00:21:02.561 Got JSON-RPC error response 00:21:02.561 GoRPCClient: error on JSON-RPC call' 00:21:02.561 12:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ 2024/11/25 12:19:58 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode23063 serial_number:hi0#]p k?drQ~'q\M/p'M], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN hi0#]p k?drQ~'q\M/p'M 00:21:02.561 request: 00:21:02.561 { 00:21:02.561 "method": "nvmf_create_subsystem", 00:21:02.561 "params": { 00:21:02.561 "nqn": "nqn.2016-06.io.spdk:cnode23063", 00:21:02.561 "serial_number": "hi0#]p k?drQ~'q\\M/p'M" 00:21:02.561 } 00:21:02.561 } 00:21:02.561 Got JSON-RPC error response 00:21:02.561 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:21:02.561 12:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:21:02.561 12:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:21:02.561 12:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:21:02.561 12:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:21:02.561 12:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:21:02.561 12:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:21:02.561 12:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:02.561 12:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:21:02.561 12:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:21:02.561 12:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:21:02.561 12:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:21:02.561 12:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:21:02.561 12:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:21:02.561 12:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:21:02.561 12:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 
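Interrupting the character-by-character trace for a moment: the 21-character serial number just rejected above draws a -32602 / Invalid SN error, most plausibly because NVMe's serial-number field is 20 bytes, so a 21-byte value cannot fit (the log only records the error string, so treat the size explanation as an inference). In isolation, and reusing the gen_random_s sketch from earlier, that step is:

# Negative test in isolation: a serial number one byte over NVMe's
# 20-byte SN field should be refused with "Invalid SN".
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sn=$(gen_random_s 21)
if "$rpc" nvmf_create_subsystem -s "$sn" nqn.2016-06.io.spdk:cnode23063; then
  echo "unexpected: over-long serial number was accepted" >&2
  exit 1
fi

The 41-character string now being assembled below exercises the model-number field the same way.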
[... the remaining 39 gen_random_s iterations trimmed: the same printf %x / echo -e / string+= pattern appends the rest of the 41-character model number, including a literal DEL byte (printf %x 127, appended as $'\177'), until the string echoed below is complete ...]
00:21:02.563 12:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:21:02.563 12:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ O == \- ]]
00:21:02.563 12:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'OplG*3PEQ>XXU#7DNMz2pALLiUvJZ#<\#N;^Hx`9'
00:21:02.563 12:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d 'OplG*3PEQ>XXU#7DNMz2pALLiUvJZ#<\#N;^Hx`9' nqn.2016-06.io.spdk:cnode3283
00:21:02.822 [2024-11-25 12:19:58.581053] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3283: invalid model number 'OplG*3PEQ>XXU#7DNMz2pALLiUvJZ#<\#N;^Hx`9'
00:21:02.822 12:19:58
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='2024/11/25 12:19:58 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:OplG*3PEQ>XXU#7DNMz2pALLiUvJZ#<\#N;^Hx`9 nqn:nqn.2016-06.io.spdk:cnode3283], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN OplG*3PEQ>XXU#7DNMz2pALLiUvJZ#<\#N;^Hx`9 00:21:02.822 request: 00:21:02.822 { 00:21:02.822 "method": "nvmf_create_subsystem", 00:21:02.822 "params": { 00:21:02.822 "nqn": "nqn.2016-06.io.spdk:cnode3283", 00:21:02.822 "model_number": "OplG*3PEQ>XXU#7DNMz2pALL\u007fiUvJZ#<\\#N;^Hx`9" 00:21:02.822 } 00:21:02.822 } 00:21:02.822 Got JSON-RPC error response 00:21:02.822 GoRPCClient: error on JSON-RPC call' 00:21:02.822 12:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ 2024/11/25 12:19:58 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:OplG*3PEQ>XXU#7DNMz2pALLiUvJZ#<\#N;^Hx`9 nqn:nqn.2016-06.io.spdk:cnode3283], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN OplG*3PEQ>XXU#7DNMz2pALLiUvJZ#<\#N;^Hx`9 00:21:02.822 request: 00:21:02.822 { 00:21:02.822 "method": "nvmf_create_subsystem", 00:21:02.822 "params": { 00:21:02.822 "nqn": "nqn.2016-06.io.spdk:cnode3283", 00:21:02.822 "model_number": "OplG*3PEQ>XXU#7DNMz2pALL\u007fiUvJZ#<\\#N;^Hx`9" 00:21:02.822 } 00:21:02.822 } 00:21:02.822 Got JSON-RPC error response 00:21:02.822 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:21:02.822 12:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:21:03.079 [2024-11-25 12:19:58.789278] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:03.079 12:19:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:21:03.335 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:21:03.335 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:21:03.335 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:21:03.335 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:21:03.335 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:21:03.592 [2024-11-25 12:19:59.261651] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:21:03.592 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='2024/11/25 12:19:59 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:21:03.592 request: 00:21:03.592 { 00:21:03.592 "method": "nvmf_subsystem_remove_listener", 00:21:03.592 "params": { 00:21:03.592 "nqn": "nqn.2016-06.io.spdk:cnode", 00:21:03.592 "listen_address": { 00:21:03.592 "trtype": "tcp", 00:21:03.592 "traddr": "", 00:21:03.592 "trsvcid": "4421" 00:21:03.592 } 00:21:03.592 } 00:21:03.592 } 00:21:03.592 Got JSON-RPC error response 00:21:03.592 GoRPCClient: error on JSON-RPC call' 
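The out= capture just above records the expected failure mode for nvmf_subsystem_remove_listener: with an empty traddr no listener matches, so the RPC must return -32602 instead of succeeding silently, and the [[ check that follows confirms the error text. As a standalone command, with paths exactly as they appear in this log:

# Removing a listener that was never added must fail; the test only
# passes if the RPC returns an error.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
if "$rpc" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode \
    -t tcp -a '' -s 4421; then
  echo "unexpected: removing a nonexistent listener succeeded" >&2
  exit 1
fi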
00:21:03.592 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ 2024/11/25 12:19:59 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:21:03.592 request: 00:21:03.592 { 00:21:03.592 "method": "nvmf_subsystem_remove_listener", 00:21:03.592 "params": { 00:21:03.592 "nqn": "nqn.2016-06.io.spdk:cnode", 00:21:03.592 "listen_address": { 00:21:03.592 "trtype": "tcp", 00:21:03.592 "traddr": "", 00:21:03.592 "trsvcid": "4421" 00:21:03.592 } 00:21:03.592 } 00:21:03.592 } 00:21:03.592 Got JSON-RPC error response 00:21:03.592 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:21:03.592 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7202 -i 0 00:21:03.592 [2024-11-25 12:19:59.465767] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7202: invalid cntlid range [0-65519] 00:21:03.849 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='2024/11/25 12:19:59 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode7202], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:21:03.849 request: 00:21:03.849 { 00:21:03.849 "method": "nvmf_create_subsystem", 00:21:03.849 "params": { 00:21:03.849 "nqn": "nqn.2016-06.io.spdk:cnode7202", 00:21:03.849 "min_cntlid": 0 00:21:03.849 } 00:21:03.849 } 00:21:03.849 Got JSON-RPC error response 00:21:03.849 GoRPCClient: error on JSON-RPC call' 00:21:03.849 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ 2024/11/25 12:19:59 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode7202], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:21:03.849 request: 00:21:03.849 { 00:21:03.849 "method": "nvmf_create_subsystem", 00:21:03.849 "params": { 00:21:03.849 "nqn": "nqn.2016-06.io.spdk:cnode7202", 00:21:03.849 "min_cntlid": 0 00:21:03.849 } 00:21:03.849 } 00:21:03.849 Got JSON-RPC error response 00:21:03.849 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:21:03.849 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode763 -i 65520 00:21:03.849 [2024-11-25 12:19:59.669978] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode763: invalid cntlid range [65520-65519] 00:21:03.849 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='2024/11/25 12:19:59 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode763], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:21:03.849 request: 00:21:03.849 { 00:21:03.849 "method": "nvmf_create_subsystem", 00:21:03.849 "params": { 00:21:03.849 "nqn": "nqn.2016-06.io.spdk:cnode763", 00:21:03.849 "min_cntlid": 65520 00:21:03.849 } 00:21:03.849 } 00:21:03.849 Got JSON-RPC error response 00:21:03.849 
GoRPCClient: error on JSON-RPC call' 00:21:03.849 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ 2024/11/25 12:19:59 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode763], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:21:03.849 request: 00:21:03.849 { 00:21:03.849 "method": "nvmf_create_subsystem", 00:21:03.849 "params": { 00:21:03.849 "nqn": "nqn.2016-06.io.spdk:cnode763", 00:21:03.849 "min_cntlid": 65520 00:21:03.849 } 00:21:03.849 } 00:21:03.849 Got JSON-RPC error response 00:21:03.849 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:21:03.849 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16148 -I 0 00:21:04.106 [2024-11-25 12:19:59.882132] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16148: invalid cntlid range [1-0] 00:21:04.106 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='2024/11/25 12:19:59 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode16148], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:21:04.106 request: 00:21:04.106 { 00:21:04.106 "method": "nvmf_create_subsystem", 00:21:04.106 "params": { 00:21:04.106 "nqn": "nqn.2016-06.io.spdk:cnode16148", 00:21:04.106 "max_cntlid": 0 00:21:04.106 } 00:21:04.106 } 00:21:04.106 Got JSON-RPC error response 00:21:04.106 GoRPCClient: error on JSON-RPC call' 00:21:04.106 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ 2024/11/25 12:19:59 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode16148], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:21:04.106 request: 00:21:04.106 { 00:21:04.106 "method": "nvmf_create_subsystem", 00:21:04.106 "params": { 00:21:04.106 "nqn": "nqn.2016-06.io.spdk:cnode16148", 00:21:04.106 "max_cntlid": 0 00:21:04.106 } 00:21:04.106 } 00:21:04.106 Got JSON-RPC error response 00:21:04.106 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:21:04.106 12:19:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31346 -I 65520 00:21:04.363 [2024-11-25 12:20:00.162406] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31346: invalid cntlid range [1-65520] 00:21:04.363 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='2024/11/25 12:20:00 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode31346], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:21:04.363 request: 00:21:04.363 { 00:21:04.363 "method": "nvmf_create_subsystem", 00:21:04.363 "params": { 00:21:04.363 "nqn": "nqn.2016-06.io.spdk:cnode31346", 00:21:04.363 "max_cntlid": 65520 00:21:04.363 } 00:21:04.363 } 00:21:04.363 Got JSON-RPC error response 00:21:04.363 GoRPCClient: error on JSON-RPC call' 00:21:04.363 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 
-- # [[ 2024/11/25 12:20:00 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode31346], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:21:04.363 request: 00:21:04.363 { 00:21:04.363 "method": "nvmf_create_subsystem", 00:21:04.363 "params": { 00:21:04.363 "nqn": "nqn.2016-06.io.spdk:cnode31346", 00:21:04.363 "max_cntlid": 65520 00:21:04.363 } 00:21:04.363 } 00:21:04.363 Got JSON-RPC error response 00:21:04.363 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:21:04.363 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7924 -i 6 -I 5 00:21:04.619 [2024-11-25 12:20:00.374568] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7924: invalid cntlid range [6-5] 00:21:04.619 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='2024/11/25 12:20:00 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode7924], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:21:04.619 request: 00:21:04.619 { 00:21:04.619 "method": "nvmf_create_subsystem", 00:21:04.619 "params": { 00:21:04.619 "nqn": "nqn.2016-06.io.spdk:cnode7924", 00:21:04.619 "min_cntlid": 6, 00:21:04.619 "max_cntlid": 5 00:21:04.619 } 00:21:04.619 } 00:21:04.619 Got JSON-RPC error response 00:21:04.619 GoRPCClient: error on JSON-RPC call' 00:21:04.619 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ 2024/11/25 12:20:00 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode7924], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:21:04.619 request: 00:21:04.619 { 00:21:04.619 "method": "nvmf_create_subsystem", 00:21:04.619 "params": { 00:21:04.619 "nqn": "nqn.2016-06.io.spdk:cnode7924", 00:21:04.619 "min_cntlid": 6, 00:21:04.619 "max_cntlid": 5 00:21:04.619 } 00:21:04.619 } 00:21:04.619 Got JSON-RPC error response 00:21:04.619 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:21:04.619 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:21:04.877 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:21:04.877 { 00:21:04.877 "name": "foobar", 00:21:04.877 "method": "nvmf_delete_target", 00:21:04.877 "req_id": 1 00:21:04.877 } 00:21:04.877 Got JSON-RPC error response 00:21:04.877 response: 00:21:04.877 { 00:21:04.877 "code": -32602, 00:21:04.877 "message": "The specified target doesn'\''t exist, cannot delete it." 00:21:04.877 }' 00:21:04.877 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:21:04.877 { 00:21:04.877 "name": "foobar", 00:21:04.877 "method": "nvmf_delete_target", 00:21:04.877 "req_id": 1 00:21:04.877 } 00:21:04.877 Got JSON-RPC error response 00:21:04.877 response: 00:21:04.877 { 00:21:04.877 "code": -32602, 00:21:04.877 "message": "The specified target doesn't exist, cannot delete it." 
00:21:04.877 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:21:04.878 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:21:04.878 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:21:04.878 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:04.878 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:21:04.878 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:04.878 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:21:04.878 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:04.878 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:04.878 rmmod nvme_tcp 00:21:04.878 rmmod nvme_fabrics 00:21:04.878 rmmod nvme_keyring 00:21:04.878 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:04.878 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:21:04.878 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:21:04.878 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 73793 ']' 00:21:04.878 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 73793 00:21:04.878 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 73793 ']' 00:21:04.878 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 73793 00:21:04.878 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:21:04.878 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:04.878 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73793 00:21:04.878 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:04.878 killing process with pid 73793 00:21:04.878 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:04.878 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73793' 00:21:04.878 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 73793 00:21:04.878 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 73793 00:21:04.878 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:04.878 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:04.878 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:04.878 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:21:04.878 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:21:04.878 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:21:04.878 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:21:04.878 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:04.878 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:04.878 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:05.135 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:05.135 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:05.135 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:05.135 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:05.135 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:05.135 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:05.135 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:05.135 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:05.135 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:05.135 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:05.135 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:05.135 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:05.135 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:05.135 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.135 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:05.135 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.135 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@300 -- # return 0 00:21:05.135 00:21:05.135 real 0m5.012s 00:21:05.135 user 0m19.272s 00:21:05.135 sys 0m0.978s 00:21:05.135 ************************************ 00:21:05.135 END TEST nvmf_invalid 00:21:05.135 ************************************ 00:21:05.135 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:05.135 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:21:05.135 12:20:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:21:05.135 12:20:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:05.135 12:20:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:05.135 12:20:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:05.135 ************************************ 00:21:05.135 START TEST nvmf_connect_stress 00:21:05.135 ************************************ 
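With nvmf_invalid finished (5.012s wall time) the harness moves straight into the next suite. Each suite is launched through run_test, which produces the START TEST / END TEST banners and the time summary seen above; the wrapper's body is not part of this excerpt, so the following is a hypothetical simplification rather than the real common/autotest_common.sh helper, which also ties into xtrace and result bookkeeping:

# Hypothetical, simplified run_test: banner, timed execution, banner.
run_test() {
  local test_name=$1; shift
  echo "************************************"
  echo "START TEST $test_name"
  echo "************************************"
  time "$@"                      # e.g. connect_stress.sh --transport=tcp
  local rc=$?
  echo "************************************"
  echo "END TEST $test_name"
  echo "************************************"
  return "$rc"
}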
00:21:05.135 12:20:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:21:05.395 * Looking for test storage... 00:21:05.395 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:05.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.395 --rc genhtml_branch_coverage=1 00:21:05.395 --rc genhtml_function_coverage=1 00:21:05.395 --rc genhtml_legend=1 00:21:05.395 --rc geninfo_all_blocks=1 00:21:05.395 --rc geninfo_unexecuted_blocks=1 00:21:05.395 00:21:05.395 ' 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:05.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.395 --rc genhtml_branch_coverage=1 00:21:05.395 --rc genhtml_function_coverage=1 00:21:05.395 --rc genhtml_legend=1 00:21:05.395 --rc geninfo_all_blocks=1 00:21:05.395 --rc geninfo_unexecuted_blocks=1 00:21:05.395 00:21:05.395 ' 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:05.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.395 --rc genhtml_branch_coverage=1 00:21:05.395 --rc genhtml_function_coverage=1 00:21:05.395 --rc genhtml_legend=1 00:21:05.395 --rc geninfo_all_blocks=1 00:21:05.395 --rc geninfo_unexecuted_blocks=1 00:21:05.395 00:21:05.395 ' 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:05.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:05.395 --rc genhtml_branch_coverage=1 00:21:05.395 --rc genhtml_function_coverage=1 00:21:05.395 --rc genhtml_legend=1 00:21:05.395 --rc geninfo_all_blocks=1 00:21:05.395 --rc geninfo_unexecuted_blocks=1 00:21:05.395 00:21:05.395 ' 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
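[Editorial note: the lt / cmp_versions trace above from scripts/common.sh splits both version strings on '.', '-' and ':' and compares the fields numerically, left to right. A simplified standalone sketch of that logic; the original additionally validates each field as decimal, which is omitted here.]

lt() {   # usage: lt 1.15 2  -> returns 0 (true) when $1 < $2
    local -a ver1 ver2
    local v
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    # walk the longer of the two field lists; missing fields count as 0
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # equal versions are not less-than
}

[In the trace this drives the lcov version check: 1.15 < 2 holds, so the older option spelling --rc lcov_branch_coverage=1 is selected.]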
00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:05.395 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:21:05.395 12:20:01 
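[Editorial note: the "line 33: [: : integer expression expected" message traced above is bash complaining that '[' '' -eq 1 ']' compares an empty string numerically; the script tolerates the failed test and falls through. A defensive variant of that numeric test, with hypothetical names, defaults the value first.]

val=${val:-0}                 # hypothetical variable; empty or unset collapses to 0
if [ "$val" -eq 1 ]; then     # now always a valid integer comparison
    echo "feature enabled"
fi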
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:05.395 12:20:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:05.395 Cannot find device "nvmf_init_br" 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # true 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:05.395 Cannot find device "nvmf_init_br2" 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # true 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:05.395 Cannot find device "nvmf_tgt_br" 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@164 -- # true 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:05.395 Cannot find device "nvmf_tgt_br2" 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@165 -- # true 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:05.395 Cannot find device "nvmf_init_br" 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@166 -- # true 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:05.395 Cannot find device "nvmf_init_br2" 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@167 -- # true 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:05.395 Cannot find device "nvmf_tgt_br" 00:21:05.395 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@168 -- # true 00:21:05.396 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:05.396 Cannot find device "nvmf_tgt_br2" 00:21:05.396 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # true 00:21:05.396 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:05.396 Cannot find device "nvmf_br" 00:21:05.396 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@170 -- # true 00:21:05.396 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:05.396 Cannot find device "nvmf_init_if" 00:21:05.396 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@171 -- # true 00:21:05.396 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:05.396 Cannot find device "nvmf_init_if2" 00:21:05.396 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@172 -- # true 00:21:05.396 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:05.396 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:05.396 12:20:01 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@173 -- # true 00:21:05.396 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:05.396 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:05.396 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@174 -- # true 00:21:05.396 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:05.653 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:05.653 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:05.653 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:05.653 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:05.653 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:05.653 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:05.653 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:05.653 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:05.653 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:05.653 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:05.653 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:05.653 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:05.653 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:05.653 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:05.653 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:05.653 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:05.653 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:05.653 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:05.653 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:05.653 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:05.653 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:05.653 12:20:01 
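[Editorial note: condensed, standalone sketch of the topology the nvmf_veth_init trace above builds. Two initiator veth ends stay on the host, the two target ends move into the nvmf_tgt_ns_spdk namespace, and the *_br peers are enslaved to the nvmf_br bridge in the trace lines that follow. Interface names and addresses are taken verbatim from the trace; the || true tolerance for the earlier "Cannot find device" cleanup is the same best-effort idiom.]

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if      # first initiator (host side)
ip addr add 10.0.0.2/24 dev nvmf_init_if2     # second initiator (host side)
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # first target
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2  # second target
ip link add nvmf_br type bridge
ip link set nvmf_br up
# remaining links are brought up as traced: nvmf_init_if*, the *_br peers,
# and nvmf_tgt_if* plus lo inside the namespace; the bridge enslavement and
# ping reachability checks follow in the trace.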
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:05.653 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:05.653 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:05.653 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:05.653 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:05.653 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:05.653 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:05.653 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:05.653 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:05.653 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:05.653 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:05.653 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:05.653 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:21:05.653 00:21:05.653 --- 10.0.0.3 ping statistics --- 00:21:05.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:05.653 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:21:05.653 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:05.653 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:05.653 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.030 ms 00:21:05.653 00:21:05.653 --- 10.0.0.4 ping statistics --- 00:21:05.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:05.653 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:21:05.653 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:05.653 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:05.653 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:21:05.653 00:21:05.653 --- 10.0.0.1 ping statistics --- 00:21:05.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:05.653 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:21:05.653 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:05.653 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:05.653 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.029 ms 00:21:05.653 00:21:05.653 --- 10.0.0.2 ping statistics --- 00:21:05.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:05.653 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:21:05.653 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:05.653 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@461 -- # return 0 00:21:05.653 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:05.653 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:05.653 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:05.653 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:05.653 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:05.653 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:05.653 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:05.653 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:21:05.653 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:05.653 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:05.653 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:05.653 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=74342 00:21:05.653 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 74342 00:21:05.653 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:05.653 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 74342 ']' 00:21:05.653 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:05.653 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:05.653 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:05.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:05.653 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:05.653 12:20:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:05.653 [2024-11-25 12:20:01.505226] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:21:05.653 [2024-11-25 12:20:01.505287] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:05.909 [2024-11-25 12:20:01.648438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:05.910 [2024-11-25 12:20:01.679829] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:05.910 [2024-11-25 12:20:01.679863] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:05.910 [2024-11-25 12:20:01.679868] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:05.910 [2024-11-25 12:20:01.679872] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:05.910 [2024-11-25 12:20:01.679876] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:05.910 [2024-11-25 12:20:01.680453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:05.910 [2024-11-25 12:20:01.680571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:05.910 [2024-11-25 12:20:01.680781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:06.844 [2024-11-25 12:20:02.415143] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:06.844 12:20:02 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:06.844 [2024-11-25 12:20:02.435258] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:06.844 NULL1 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=74394 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74394 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.844 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:07.102 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:21:07.102 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74394 00:21:07.102 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:07.102 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.102 12:20:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:07.359 12:20:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.359 12:20:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74394 00:21:07.359 12:20:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:07.359 12:20:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.359 12:20:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:07.617 12:20:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.617 12:20:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74394 00:21:07.617 12:20:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:07.617 12:20:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.617 12:20:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:08.182 12:20:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.182 12:20:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74394 00:21:08.182 12:20:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:08.182 12:20:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.182 12:20:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:08.462 12:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.462 12:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74394 00:21:08.462 12:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:08.462 12:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.462 12:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:08.720 12:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.720 12:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74394 00:21:08.720 12:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:08.720 12:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.720 12:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:08.978 12:20:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.978 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74394 00:21:16.966 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:16.966 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.966 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:16.966 Testing NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:17.272 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.272 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 74394 00:21:17.272 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (74394) - No such process 00:21:17.272 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 74394 00:21:17.272 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:21:17.272 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:21:17.272 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:21:17.272 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:17.272 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:21:17.272 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:17.272 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:21:17.272 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:17.272 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:17.272 rmmod nvme_tcp 00:21:17.272 rmmod nvme_fabrics 00:21:17.272 rmmod nvme_keyring 00:21:17.272 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:17.272 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:21:17.272 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:21:17.272 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 74342 ']' 00:21:17.272 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 74342 00:21:17.272 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 74342 ']' 00:21:17.272 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 74342 00:21:17.272 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:21:17.272 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:17.272 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74342 00:21:17.272 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:17.272 12:20:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:17.272 killing process with pid 74342 00:21:17.272 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74342' 00:21:17.272 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 74342 00:21:17.272 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 74342 00:21:17.272 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:17.272 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:17.272 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:17.272 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:21:17.272 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:21:17.272 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:21:17.272 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:17.272 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:17.272 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:17.272 12:20:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:17.272 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:17.272 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:17.272 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:17.272 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:17.272 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:17.272 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:17.272 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:17.272 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:17.272 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:17.272 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:17.272 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:17.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:17.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:17.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:17.532 
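The iptr helper traced above (nvmf/common.sh@297, expanded at @791) strips only SPDK's own firewall rules by filtering the tagged entries out of a saved ruleset. Reassembled as one pipeline (the three @791 fragments are assumed to form a single command):

  iptables-save | grep -v SPDK_NVMF | iptables-restore

Every rule the test added carries an 'SPDK_NVMF:' comment, so this restores the tables to their pre-test state without disturbing unrelated rules.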
12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:17.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:17.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@300 -- # return 0 00:21:17.532 ************************************ 00:21:17.532 END TEST nvmf_connect_stress 00:21:17.532 ************************************ 00:21:17.532 00:21:17.532 real 0m12.208s 00:21:17.532 user 0m41.087s 00:21:17.532 sys 0m2.636s 00:21:17.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:17.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:21:17.532 12:20:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:21:17.532 12:20:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:17.532 12:20:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:17.532 12:20:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:17.532 ************************************ 00:21:17.532 START TEST nvmf_fused_ordering 00:21:17.532 ************************************ 00:21:17.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:21:17.532 * Looking for test storage... 00:21:17.532 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:17.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:17.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:21:17.532 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:17.533 12:20:13 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:17.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.533 --rc genhtml_branch_coverage=1 00:21:17.533 --rc genhtml_function_coverage=1 00:21:17.533 --rc genhtml_legend=1 00:21:17.533 --rc geninfo_all_blocks=1 00:21:17.533 --rc geninfo_unexecuted_blocks=1 00:21:17.533 00:21:17.533 ' 00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:17.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.533 --rc genhtml_branch_coverage=1 00:21:17.533 --rc genhtml_function_coverage=1 00:21:17.533 --rc genhtml_legend=1 00:21:17.533 --rc geninfo_all_blocks=1 00:21:17.533 --rc geninfo_unexecuted_blocks=1 00:21:17.533 00:21:17.533 ' 00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:17.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.533 --rc genhtml_branch_coverage=1 00:21:17.533 --rc genhtml_function_coverage=1 00:21:17.533 --rc genhtml_legend=1 00:21:17.533 --rc geninfo_all_blocks=1 00:21:17.533 --rc geninfo_unexecuted_blocks=1 00:21:17.533 00:21:17.533 ' 00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:17.533 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:21:17.533 --rc genhtml_branch_coverage=1 00:21:17.533 --rc genhtml_function_coverage=1 00:21:17.533 --rc genhtml_legend=1 00:21:17.533 --rc geninfo_all_blocks=1 00:21:17.533 --rc geninfo_unexecuted_blocks=1 00:21:17.533 00:21:17.533 ' 00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same toolchain triplet repeated]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=[as above, with /opt/go/1.21.1/bin prepended]
00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=[as above, with /opt/protoc/21.7/bin prepended]
00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH
00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo [the PATH value from @4]
00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0
00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33
-- # '[' '' -eq 1 ']' 00:21:17.533 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:17.533 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:17.534 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:17.534 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:17.534 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:21:17.534 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:17.534 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:17.534 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:17.534 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:17.534 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:17.534 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:17.534 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:17.534 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:17.534 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:17.534 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:17.534 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:17.534 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:17.534 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:17.534 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:17.534 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:17.534 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:17.534 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:17.534 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:17.534 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:17.534 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:17.534 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:17.534 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:17.534 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:17.534 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:17.534 12:20:13 
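The "[: : integer expression expected" message above is bash complaining, not a test failure: nvmf/common.sh line 33 runs an arithmetic test ('[' '' -eq 1 ']') against an empty value. The same message is reproducible in isolation (illustrative snippet, not from the repo):

  [ "" -eq 1 ]    # bash: [: : integer expression expected; exit status 2

Execution simply continues with the next guard (nvmf/common.sh@37), so the message is cosmetic here.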
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:17.534 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:17.534 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:17.534 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:17.534 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:17.534 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:17.534 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:17.792 Cannot find device "nvmf_init_br" 00:21:17.792 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # true 00:21:17.792 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:17.792 Cannot find device "nvmf_init_br2" 00:21:17.792 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # true 00:21:17.792 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:17.792 Cannot find device "nvmf_tgt_br" 00:21:17.792 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@164 -- # true 00:21:17.792 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:17.792 Cannot find device "nvmf_tgt_br2" 00:21:17.792 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@165 -- # true 00:21:17.792 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:17.792 Cannot find device "nvmf_init_br" 00:21:17.792 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@166 -- # true 00:21:17.792 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:17.792 Cannot find device "nvmf_init_br2" 00:21:17.792 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@167 -- # true 00:21:17.792 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:17.792 Cannot find device "nvmf_tgt_br" 00:21:17.792 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@168 -- # true 00:21:17.792 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:17.792 Cannot find device "nvmf_tgt_br2" 00:21:17.792 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # true 00:21:17.792 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:17.792 Cannot find device "nvmf_br" 00:21:17.792 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@170 -- # true 00:21:17.792 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:17.792 Cannot find device "nvmf_init_if" 00:21:17.792 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@171 -- # true 00:21:17.792 
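The "Cannot find device" records around this point all follow the same idempotent pre-clean pattern: attempt the teardown, and treat a missing device as success so a fresh run never trips over leftovers from an earlier one. The bare "# true" traces are consistent with a guard of this shape (a sketch, not the verbatim source):

  ip link set nvmf_init_br nomaster || true    # fine if the device does not exist yet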
12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:17.792 Cannot find device "nvmf_init_if2" 00:21:17.792 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@172 -- # true 00:21:17.792 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:17.792 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:17.792 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@173 -- # true 00:21:17.792 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:17.792 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:17.792 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@174 -- # true 00:21:17.792 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:17.793 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:17.793 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:17.793 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:17.793 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:17.793 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:17.793 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:17.793 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:17.793 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:17.793 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:17.793 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:17.793 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:17.793 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:17.793 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:17.793 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:17.793 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:17.793 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:17.793 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:17.793 12:20:13 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:17.793 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:17.793 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:17.793 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:17.793 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:17.793 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:17.793 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:17.793 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:17.793 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:17.793 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:17.793 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:17.793 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:17.793 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:17.793 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:17.793 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:17.793 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:17.793 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:21:17.793 00:21:17.793 --- 10.0.0.3 ping statistics --- 00:21:17.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:17.793 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:21:17.793 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:17.793 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:17.793 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:21:17.793 00:21:17.793 --- 10.0.0.4 ping statistics --- 00:21:17.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:17.793 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:21:17.793 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:18.052 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:18.052 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:21:18.052 00:21:18.052 --- 10.0.0.1 ping statistics --- 00:21:18.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:18.052 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:21:18.052 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:18.052 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:18.052 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:21:18.052 00:21:18.052 --- 10.0.0.2 ping statistics --- 00:21:18.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:18.052 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:21:18.052 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:18.052 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@461 -- # return 0 00:21:18.052 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:18.052 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:18.052 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:18.052 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:18.052 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:18.052 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:18.052 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:18.052 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:21:18.052 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:18.052 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:18.052 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:21:18.052 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=74766 00:21:18.052 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 74766 00:21:18.052 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 74766 ']' 00:21:18.052 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:18.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:18.052 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:18.052 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
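At this point nvmf_veth_init has finished assembling the test network traced above: initiator veths in the root namespace, target veths inside nvmf_tgt_ns_spdk, everything enslaved to the nvmf_br bridge, SPDK-tagged iptables ACCEPT rules for port 4420, and ping checks in all four directions. Condensed to its essential commands (all taken verbatim from the trace; the second if2/br2 pair, the link-up steps, and the remaining pings follow the same pattern):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.3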
00:21:18.052 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:18.052 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:21:18.052 12:20:13 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:18.052 [2024-11-25 12:20:13.733336] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:21:18.052 [2024-11-25 12:20:13.733389] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:18.052 [2024-11-25 12:20:13.871416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.052 [2024-11-25 12:20:13.906899] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:18.052 [2024-11-25 12:20:13.907048] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:18.052 [2024-11-25 12:20:13.907124] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:18.052 [2024-11-25 12:20:13.907204] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:18.052 [2024-11-25 12:20:13.907228] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:18.052 [2024-11-25 12:20:13.907551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:18.986 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:18.986 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:21:18.986 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:18.986 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:18.986 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:21:18.986 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:18.986 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:18.986 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.986 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:21:18.986 [2024-11-25 12:20:14.609159] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:18.986 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.986 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:21:18.986 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.986 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:21:18.986 
12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.986 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:18.986 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.986 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:21:18.986 [2024-11-25 12:20:14.625224] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:18.986 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.986 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:21:18.986 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.986 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:21:18.986 NULL1 00:21:18.986 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.986 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:21:18.986 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.986 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:21:18.986 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.986 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:21:18.986 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.986 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:21:18.986 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.986 12:20:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:18.986 [2024-11-25 12:20:14.673018] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
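The target-side setup just traced (fused_ordering.sh@13-@22) maps onto a handful of plain RPCs. Replayed by hand with SPDK's scripts/rpc.py it would look like the sketch below; in the harness these same calls go through the rpc_cmd wrapper, and waitforlisten gates the first RPC until the socket is up:

  ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &   # one reactor (core 1), all tracepoint groups
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                      # transport options exactly as traced
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  scripts/rpc.py bdev_null_create NULL1 1000 512                              # ~1 GB null bdev, 512 B blocks ("size: 1GB" below)
  scripts/rpc.py bdev_wait_for_examine
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
  test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'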
00:21:18.986 [2024-11-25 12:20:14.673048] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74816 ]
00:21:19.244 Attached to nqn.2016-06.io.spdk:cnode1
00:21:19.244 Namespace ID: 1 size: 1GB
00:21:19.244 fused_ordering(0)
00:21:19.244 fused_ordering(1)
[... fused_ordering(2) through fused_ordering(633) elided: the counter advances by exactly one per record with no gaps or reordering, timestamps running from 00:21:19.244 to 00:21:20.328, including a ~0.5 s pause between fused_ordering(615) and fused_ordering(616) ...]
00:21:20.328 fused_ordering(634)
00:21:20.328 fused_ordering(635)
00:21:20.328 fused_ordering(636) 00:21:20.328 fused_ordering(637) 00:21:20.328 fused_ordering(638) 00:21:20.328 fused_ordering(639) 00:21:20.328 fused_ordering(640) 00:21:20.328 fused_ordering(641) 00:21:20.328 fused_ordering(642) 00:21:20.328 fused_ordering(643) 00:21:20.328 fused_ordering(644) 00:21:20.328 fused_ordering(645) 00:21:20.328 fused_ordering(646) 00:21:20.328 fused_ordering(647) 00:21:20.328 fused_ordering(648) 00:21:20.328 fused_ordering(649) 00:21:20.328 fused_ordering(650) 00:21:20.328 fused_ordering(651) 00:21:20.328 fused_ordering(652) 00:21:20.328 fused_ordering(653) 00:21:20.328 fused_ordering(654) 00:21:20.328 fused_ordering(655) 00:21:20.328 fused_ordering(656) 00:21:20.328 fused_ordering(657) 00:21:20.328 fused_ordering(658) 00:21:20.328 fused_ordering(659) 00:21:20.328 fused_ordering(660) 00:21:20.328 fused_ordering(661) 00:21:20.328 fused_ordering(662) 00:21:20.328 fused_ordering(663) 00:21:20.328 fused_ordering(664) 00:21:20.328 fused_ordering(665) 00:21:20.328 fused_ordering(666) 00:21:20.328 fused_ordering(667) 00:21:20.328 fused_ordering(668) 00:21:20.328 fused_ordering(669) 00:21:20.328 fused_ordering(670) 00:21:20.328 fused_ordering(671) 00:21:20.328 fused_ordering(672) 00:21:20.328 fused_ordering(673) 00:21:20.328 fused_ordering(674) 00:21:20.328 fused_ordering(675) 00:21:20.328 fused_ordering(676) 00:21:20.328 fused_ordering(677) 00:21:20.328 fused_ordering(678) 00:21:20.328 fused_ordering(679) 00:21:20.328 fused_ordering(680) 00:21:20.328 fused_ordering(681) 00:21:20.328 fused_ordering(682) 00:21:20.328 fused_ordering(683) 00:21:20.328 fused_ordering(684) 00:21:20.328 fused_ordering(685) 00:21:20.328 fused_ordering(686) 00:21:20.328 fused_ordering(687) 00:21:20.328 fused_ordering(688) 00:21:20.328 fused_ordering(689) 00:21:20.328 fused_ordering(690) 00:21:20.328 fused_ordering(691) 00:21:20.328 fused_ordering(692) 00:21:20.328 fused_ordering(693) 00:21:20.328 fused_ordering(694) 00:21:20.328 fused_ordering(695) 00:21:20.328 fused_ordering(696) 00:21:20.328 fused_ordering(697) 00:21:20.328 fused_ordering(698) 00:21:20.328 fused_ordering(699) 00:21:20.328 fused_ordering(700) 00:21:20.328 fused_ordering(701) 00:21:20.328 fused_ordering(702) 00:21:20.328 fused_ordering(703) 00:21:20.328 fused_ordering(704) 00:21:20.328 fused_ordering(705) 00:21:20.328 fused_ordering(706) 00:21:20.329 fused_ordering(707) 00:21:20.329 fused_ordering(708) 00:21:20.329 fused_ordering(709) 00:21:20.329 fused_ordering(710) 00:21:20.329 fused_ordering(711) 00:21:20.329 fused_ordering(712) 00:21:20.329 fused_ordering(713) 00:21:20.329 fused_ordering(714) 00:21:20.329 fused_ordering(715) 00:21:20.329 fused_ordering(716) 00:21:20.329 fused_ordering(717) 00:21:20.329 fused_ordering(718) 00:21:20.329 fused_ordering(719) 00:21:20.329 fused_ordering(720) 00:21:20.329 fused_ordering(721) 00:21:20.329 fused_ordering(722) 00:21:20.329 fused_ordering(723) 00:21:20.329 fused_ordering(724) 00:21:20.329 fused_ordering(725) 00:21:20.329 fused_ordering(726) 00:21:20.329 fused_ordering(727) 00:21:20.329 fused_ordering(728) 00:21:20.329 fused_ordering(729) 00:21:20.329 fused_ordering(730) 00:21:20.329 fused_ordering(731) 00:21:20.329 fused_ordering(732) 00:21:20.329 fused_ordering(733) 00:21:20.329 fused_ordering(734) 00:21:20.329 fused_ordering(735) 00:21:20.329 fused_ordering(736) 00:21:20.329 fused_ordering(737) 00:21:20.329 fused_ordering(738) 00:21:20.329 fused_ordering(739) 00:21:20.329 fused_ordering(740) 00:21:20.329 fused_ordering(741) 00:21:20.329 fused_ordering(742) 00:21:20.329 
fused_ordering(743) 00:21:20.329 fused_ordering(744) 00:21:20.329 fused_ordering(745) 00:21:20.329 fused_ordering(746) 00:21:20.329 fused_ordering(747) 00:21:20.329 fused_ordering(748) 00:21:20.329 fused_ordering(749) 00:21:20.329 fused_ordering(750) 00:21:20.329 fused_ordering(751) 00:21:20.329 fused_ordering(752) 00:21:20.329 fused_ordering(753) 00:21:20.329 fused_ordering(754) 00:21:20.329 fused_ordering(755) 00:21:20.329 fused_ordering(756) 00:21:20.329 fused_ordering(757) 00:21:20.329 fused_ordering(758) 00:21:20.329 fused_ordering(759) 00:21:20.329 fused_ordering(760) 00:21:20.329 fused_ordering(761) 00:21:20.329 fused_ordering(762) 00:21:20.329 fused_ordering(763) 00:21:20.329 fused_ordering(764) 00:21:20.329 fused_ordering(765) 00:21:20.329 fused_ordering(766) 00:21:20.329 fused_ordering(767) 00:21:20.329 fused_ordering(768) 00:21:20.329 fused_ordering(769) 00:21:20.329 fused_ordering(770) 00:21:20.329 fused_ordering(771) 00:21:20.329 fused_ordering(772) 00:21:20.329 fused_ordering(773) 00:21:20.329 fused_ordering(774) 00:21:20.329 fused_ordering(775) 00:21:20.329 fused_ordering(776) 00:21:20.329 fused_ordering(777) 00:21:20.329 fused_ordering(778) 00:21:20.329 fused_ordering(779) 00:21:20.329 fused_ordering(780) 00:21:20.329 fused_ordering(781) 00:21:20.329 fused_ordering(782) 00:21:20.329 fused_ordering(783) 00:21:20.329 fused_ordering(784) 00:21:20.329 fused_ordering(785) 00:21:20.329 fused_ordering(786) 00:21:20.329 fused_ordering(787) 00:21:20.329 fused_ordering(788) 00:21:20.329 fused_ordering(789) 00:21:20.329 fused_ordering(790) 00:21:20.329 fused_ordering(791) 00:21:20.329 fused_ordering(792) 00:21:20.329 fused_ordering(793) 00:21:20.329 fused_ordering(794) 00:21:20.329 fused_ordering(795) 00:21:20.329 fused_ordering(796) 00:21:20.329 fused_ordering(797) 00:21:20.329 fused_ordering(798) 00:21:20.329 fused_ordering(799) 00:21:20.329 fused_ordering(800) 00:21:20.329 fused_ordering(801) 00:21:20.329 fused_ordering(802) 00:21:20.329 fused_ordering(803) 00:21:20.329 fused_ordering(804) 00:21:20.329 fused_ordering(805) 00:21:20.329 fused_ordering(806) 00:21:20.329 fused_ordering(807) 00:21:20.329 fused_ordering(808) 00:21:20.329 fused_ordering(809) 00:21:20.329 fused_ordering(810) 00:21:20.329 fused_ordering(811) 00:21:20.329 fused_ordering(812) 00:21:20.329 fused_ordering(813) 00:21:20.329 fused_ordering(814) 00:21:20.329 fused_ordering(815) 00:21:20.329 fused_ordering(816) 00:21:20.329 fused_ordering(817) 00:21:20.329 fused_ordering(818) 00:21:20.329 fused_ordering(819) 00:21:20.329 fused_ordering(820) 00:21:20.587 fused_ordering(821) 00:21:20.587 fused_ordering(822) 00:21:20.587 fused_ordering(823) 00:21:20.587 fused_ordering(824) 00:21:20.587 fused_ordering(825) 00:21:20.587 fused_ordering(826) 00:21:20.587 fused_ordering(827) 00:21:20.587 fused_ordering(828) 00:21:20.587 fused_ordering(829) 00:21:20.587 fused_ordering(830) 00:21:20.587 fused_ordering(831) 00:21:20.587 fused_ordering(832) 00:21:20.587 fused_ordering(833) 00:21:20.587 fused_ordering(834) 00:21:20.587 fused_ordering(835) 00:21:20.587 fused_ordering(836) 00:21:20.587 fused_ordering(837) 00:21:20.587 fused_ordering(838) 00:21:20.587 fused_ordering(839) 00:21:20.587 fused_ordering(840) 00:21:20.587 fused_ordering(841) 00:21:20.587 fused_ordering(842) 00:21:20.587 fused_ordering(843) 00:21:20.587 fused_ordering(844) 00:21:20.587 fused_ordering(845) 00:21:20.587 fused_ordering(846) 00:21:20.587 fused_ordering(847) 00:21:20.587 fused_ordering(848) 00:21:20.587 fused_ordering(849) 00:21:20.587 fused_ordering(850) 
00:21:20.587 fused_ordering(851) 00:21:20.587 fused_ordering(852) 00:21:20.587 fused_ordering(853) 00:21:20.587 fused_ordering(854) 00:21:20.587 fused_ordering(855) 00:21:20.587 fused_ordering(856) 00:21:20.587 fused_ordering(857) 00:21:20.587 fused_ordering(858) 00:21:20.587 fused_ordering(859) 00:21:20.587 fused_ordering(860) 00:21:20.587 fused_ordering(861) 00:21:20.587 fused_ordering(862) 00:21:20.587 fused_ordering(863) 00:21:20.587 fused_ordering(864) 00:21:20.587 fused_ordering(865) 00:21:20.587 fused_ordering(866) 00:21:20.587 fused_ordering(867) 00:21:20.587 fused_ordering(868) 00:21:20.587 fused_ordering(869) 00:21:20.587 fused_ordering(870) 00:21:20.587 fused_ordering(871) 00:21:20.587 fused_ordering(872) 00:21:20.587 fused_ordering(873) 00:21:20.587 fused_ordering(874) 00:21:20.587 fused_ordering(875) 00:21:20.587 fused_ordering(876) 00:21:20.587 fused_ordering(877) 00:21:20.587 fused_ordering(878) 00:21:20.587 fused_ordering(879) 00:21:20.587 fused_ordering(880) 00:21:20.587 fused_ordering(881) 00:21:20.587 fused_ordering(882) 00:21:20.587 fused_ordering(883) 00:21:20.587 fused_ordering(884) 00:21:20.587 fused_ordering(885) 00:21:20.587 fused_ordering(886) 00:21:20.587 fused_ordering(887) 00:21:20.587 fused_ordering(888) 00:21:20.587 fused_ordering(889) 00:21:20.587 fused_ordering(890) 00:21:20.587 fused_ordering(891) 00:21:20.587 fused_ordering(892) 00:21:20.587 fused_ordering(893) 00:21:20.587 fused_ordering(894) 00:21:20.587 fused_ordering(895) 00:21:20.587 fused_ordering(896) 00:21:20.587 fused_ordering(897) 00:21:20.587 fused_ordering(898) 00:21:20.587 fused_ordering(899) 00:21:20.587 fused_ordering(900) 00:21:20.587 fused_ordering(901) 00:21:20.587 fused_ordering(902) 00:21:20.587 fused_ordering(903) 00:21:20.587 fused_ordering(904) 00:21:20.587 fused_ordering(905) 00:21:20.587 fused_ordering(906) 00:21:20.587 fused_ordering(907) 00:21:20.587 fused_ordering(908) 00:21:20.587 fused_ordering(909) 00:21:20.587 fused_ordering(910) 00:21:20.587 fused_ordering(911) 00:21:20.587 fused_ordering(912) 00:21:20.587 fused_ordering(913) 00:21:20.587 fused_ordering(914) 00:21:20.587 fused_ordering(915) 00:21:20.587 fused_ordering(916) 00:21:20.587 fused_ordering(917) 00:21:20.587 fused_ordering(918) 00:21:20.587 fused_ordering(919) 00:21:20.587 fused_ordering(920) 00:21:20.587 fused_ordering(921) 00:21:20.587 fused_ordering(922) 00:21:20.587 fused_ordering(923) 00:21:20.587 fused_ordering(924) 00:21:20.587 fused_ordering(925) 00:21:20.587 fused_ordering(926) 00:21:20.587 fused_ordering(927) 00:21:20.587 fused_ordering(928) 00:21:20.587 fused_ordering(929) 00:21:20.587 fused_ordering(930) 00:21:20.587 fused_ordering(931) 00:21:20.587 fused_ordering(932) 00:21:20.587 fused_ordering(933) 00:21:20.587 fused_ordering(934) 00:21:20.587 fused_ordering(935) 00:21:20.587 fused_ordering(936) 00:21:20.587 fused_ordering(937) 00:21:20.587 fused_ordering(938) 00:21:20.587 fused_ordering(939) 00:21:20.587 fused_ordering(940) 00:21:20.587 fused_ordering(941) 00:21:20.587 fused_ordering(942) 00:21:20.587 fused_ordering(943) 00:21:20.587 fused_ordering(944) 00:21:20.587 fused_ordering(945) 00:21:20.587 fused_ordering(946) 00:21:20.587 fused_ordering(947) 00:21:20.587 fused_ordering(948) 00:21:20.587 fused_ordering(949) 00:21:20.587 fused_ordering(950) 00:21:20.587 fused_ordering(951) 00:21:20.587 fused_ordering(952) 00:21:20.587 fused_ordering(953) 00:21:20.587 fused_ordering(954) 00:21:20.587 fused_ordering(955) 00:21:20.587 fused_ordering(956) 00:21:20.587 fused_ordering(957) 00:21:20.587 
fused_ordering(958) 00:21:20.587 fused_ordering(959) 00:21:20.587 fused_ordering(960) 00:21:20.587 fused_ordering(961) 00:21:20.587 fused_ordering(962) 00:21:20.587 fused_ordering(963) 00:21:20.587 fused_ordering(964) 00:21:20.587 fused_ordering(965) 00:21:20.587 fused_ordering(966) 00:21:20.587 fused_ordering(967) 00:21:20.587 fused_ordering(968) 00:21:20.587 fused_ordering(969) 00:21:20.587 fused_ordering(970) 00:21:20.587 fused_ordering(971) 00:21:20.587 fused_ordering(972) 00:21:20.587 fused_ordering(973) 00:21:20.587 fused_ordering(974) 00:21:20.587 fused_ordering(975) 00:21:20.587 fused_ordering(976) 00:21:20.587 fused_ordering(977) 00:21:20.587 fused_ordering(978) 00:21:20.587 fused_ordering(979) 00:21:20.587 fused_ordering(980) 00:21:20.587 fused_ordering(981) 00:21:20.587 fused_ordering(982) 00:21:20.587 fused_ordering(983) 00:21:20.587 fused_ordering(984) 00:21:20.587 fused_ordering(985) 00:21:20.587 fused_ordering(986) 00:21:20.587 fused_ordering(987) 00:21:20.587 fused_ordering(988) 00:21:20.587 fused_ordering(989) 00:21:20.587 fused_ordering(990) 00:21:20.587 fused_ordering(991) 00:21:20.587 fused_ordering(992) 00:21:20.587 fused_ordering(993) 00:21:20.587 fused_ordering(994) 00:21:20.587 fused_ordering(995) 00:21:20.587 fused_ordering(996) 00:21:20.587 fused_ordering(997) 00:21:20.587 fused_ordering(998) 00:21:20.587 fused_ordering(999) 00:21:20.587 fused_ordering(1000) 00:21:20.587 fused_ordering(1001) 00:21:20.587 fused_ordering(1002) 00:21:20.587 fused_ordering(1003) 00:21:20.587 fused_ordering(1004) 00:21:20.587 fused_ordering(1005) 00:21:20.587 fused_ordering(1006) 00:21:20.587 fused_ordering(1007) 00:21:20.587 fused_ordering(1008) 00:21:20.587 fused_ordering(1009) 00:21:20.587 fused_ordering(1010) 00:21:20.587 fused_ordering(1011) 00:21:20.587 fused_ordering(1012) 00:21:20.587 fused_ordering(1013) 00:21:20.587 fused_ordering(1014) 00:21:20.587 fused_ordering(1015) 00:21:20.587 fused_ordering(1016) 00:21:20.587 fused_ordering(1017) 00:21:20.588 fused_ordering(1018) 00:21:20.588 fused_ordering(1019) 00:21:20.588 fused_ordering(1020) 00:21:20.588 fused_ordering(1021) 00:21:20.588 fused_ordering(1022) 00:21:20.588 fused_ordering(1023) 00:21:20.588 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:21:20.588 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:21:20.588 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:20.588 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:21:20.588 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:20.588 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:21:20.588 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:20.588 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:20.588 rmmod nvme_tcp 00:21:20.588 rmmod nvme_fabrics 00:21:20.588 rmmod nvme_keyring 00:21:20.846 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:20.846 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:21:20.846 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:21:20.846 12:20:16 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 74766 ']' 00:21:20.846 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 74766 00:21:20.846 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 74766 ']' 00:21:20.846 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 74766 00:21:20.846 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:21:20.846 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:20.846 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74766 00:21:20.846 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:20.846 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:20.846 killing process with pid 74766 00:21:20.846 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74766' 00:21:20.846 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 74766 00:21:20.846 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 74766 00:21:20.846 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:20.846 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:20.846 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:20.846 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:21:20.846 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:21:20.846 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:21:20.846 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:20.846 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:20.846 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:20.846 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:20.846 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:20.846 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:20.846 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:20.846 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:20.846 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:20.846 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:20.846 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # 
ip link set nvmf_tgt_br2 down 00:21:20.846 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:21.104 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:21.104 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:21.104 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:21.104 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:21.104 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:21.104 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:21.104 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:21.104 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.104 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@300 -- # return 0 00:21:21.104 00:21:21.104 real 0m3.598s 00:21:21.104 user 0m4.247s 00:21:21.104 sys 0m0.955s 00:21:21.104 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:21.104 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:21:21.104 ************************************ 00:21:21.104 END TEST nvmf_fused_ordering 00:21:21.104 ************************************ 00:21:21.104 12:20:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:21:21.104 12:20:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:21.104 12:20:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:21.104 12:20:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:21.104 ************************************ 00:21:21.104 START TEST nvmf_ns_masking 00:21:21.104 ************************************ 00:21:21.104 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:21:21.104 * Looking for test storage... 
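Before the ns_masking setup output begins, the teardown that closed out the fused_ordering run above is worth condensing into one place. The sketch below is reconstructed purely from the xtrace lines; the function names mirror nvmf/common.sh, but the retry loop and error handling are simplified assumptions, not the verbatim SPDK source.

nvmfcleanup() {
    sync
    # Unload the kernel initiator modules; the harness retries because the
    # modules can stay busy briefly after the test exits (exact retry policy
    # here is an assumption, the trace only shows 'for i in {1..20}').
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    done
    set -e
}

nvmf_veth_fini() {
    local dev
    # Detach every veth bridge endpoint, bring it down, then delete the links.
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster
        ip link set "$dev" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk   # assumption: what _remove_spdk_ns amounts to
}

# iptr: restore the firewall minus the rules the test tagged with SPDK_NVMF.
iptables-save | grep -v SPDK_NVMF | iptables-restore

Tagging each inserted rule with a comment and stripping by that tag on the way out is what lets the harness clean up firewall state without tracking rule numbers.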
00:21:21.104 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:21.104 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:21.104 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:21:21.104 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:21.362 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:21.362 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:21.362 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:21.362 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:21.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.363 --rc genhtml_branch_coverage=1 00:21:21.363 --rc genhtml_function_coverage=1 00:21:21.363 --rc genhtml_legend=1 00:21:21.363 --rc geninfo_all_blocks=1 00:21:21.363 --rc geninfo_unexecuted_blocks=1 00:21:21.363 00:21:21.363 ' 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:21.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.363 --rc genhtml_branch_coverage=1 00:21:21.363 --rc genhtml_function_coverage=1 00:21:21.363 --rc genhtml_legend=1 00:21:21.363 --rc geninfo_all_blocks=1 00:21:21.363 --rc geninfo_unexecuted_blocks=1 00:21:21.363 00:21:21.363 ' 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:21.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.363 --rc genhtml_branch_coverage=1 00:21:21.363 --rc genhtml_function_coverage=1 00:21:21.363 --rc genhtml_legend=1 00:21:21.363 --rc geninfo_all_blocks=1 00:21:21.363 --rc geninfo_unexecuted_blocks=1 00:21:21.363 00:21:21.363 ' 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:21.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:21.363 --rc genhtml_branch_coverage=1 00:21:21.363 --rc genhtml_function_coverage=1 00:21:21.363 --rc genhtml_legend=1 00:21:21.363 --rc geninfo_all_blocks=1 00:21:21.363 --rc geninfo_unexecuted_blocks=1 00:21:21.363 00:21:21.363 ' 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- 
# uname -s 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:21.363 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:21.364 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # 
hostsock=/var/tmp/host.sock 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=b51fa26c-732c-43c0-b1d7-1040f16c63d8 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=c1bcaec9-0683-408f-86b8-6cde303f3e38 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=d218b32e-7e9c-4c58-84af-ac6f3dd59cd0 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:21.364 12:20:17 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:21.364 Cannot find device "nvmf_init_br" 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # true 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:21.364 Cannot find device "nvmf_init_br2" 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # true 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:21.364 Cannot find device "nvmf_tgt_br" 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@164 -- # true 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:21.364 Cannot find device "nvmf_tgt_br2" 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@165 -- # true 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:21.364 Cannot find device "nvmf_init_br" 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@166 -- # true 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:21.364 Cannot find device "nvmf_init_br2" 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@167 -- # true 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:21.364 Cannot find device "nvmf_tgt_br" 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@168 -- # true 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:21.364 Cannot find device 
"nvmf_tgt_br2" 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # true 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:21.364 Cannot find device "nvmf_br" 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@170 -- # true 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:21.364 Cannot find device "nvmf_init_if" 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@171 -- # true 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:21.364 Cannot find device "nvmf_init_if2" 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@172 -- # true 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:21.364 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@173 -- # true 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:21.364 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@174 -- # true 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:21.364 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:21.623 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:21.623 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:21.623 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:21.623 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:21.623 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:21.623 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:21.623 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:21.623 
12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:21.623 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:21.623 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:21.623 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:21.623 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:21.623 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:21.623 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:21.623 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:21.623 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:21.623 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:21.623 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:21.623 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:21.623 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:21.623 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:21.623 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:21.623 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:21.623 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:21.623 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:21.623 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:21.623 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:21.623 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:21:21.623 00:21:21.623 --- 10.0.0.3 ping statistics --- 00:21:21.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:21.623 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:21:21.623 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:21.623 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:21:21.623 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:21:21.623 00:21:21.623 --- 10.0.0.4 ping statistics --- 00:21:21.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:21.623 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:21:21.623 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:21.623 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:21.623 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:21:21.623 00:21:21.623 --- 10.0.0.1 ping statistics --- 00:21:21.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:21.623 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:21:21.623 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:21.623 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:21.623 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:21:21.623 00:21:21.623 --- 10.0.0.2 ping statistics --- 00:21:21.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:21.623 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:21:21.623 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:21.623 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@461 -- # return 0 00:21:21.623 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:21.623 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:21.623 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:21.623 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:21.623 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:21.623 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:21.623 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:21.624 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:21:21.624 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:21.624 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:21.624 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:21:21.624 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=75056 00:21:21.624 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 75056 00:21:21.624 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 75056 ']' 00:21:21.624 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:21.624 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:21.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
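Collecting the nvmf_veth_init trace above into a standalone sketch of the test topology: two veth pairs face the host initiator (10.0.0.1/.2), two face a target namespace (10.0.0.3/.4), and all four bridge-side ends meet in nvmf_br so everything shares one L2 segment. Command ordering is simplified relative to the trace.

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

ip link set nvmf_init_if up
ip link set nvmf_init_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
    ip link set "$dev" master nvmf_br
done

# Admit NVMe/TCP traffic (port 4420) and bridge forwarding; the SPDK_NVMF
# comment tags each rule so the teardown pipe shown earlier can strip it.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

# Connectivity check in both directions before the target starts, as in the
# ping output above.
ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2

The bridge-of-veths arrangement keeps the kernel initiator and the SPDK target on one virtual segment while the target still runs in its own network namespace, so a single VM can play both roles.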
00:21:21.624 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:21.624 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:21.624 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:21:21.624 12:20:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:21.624 [2024-11-25 12:20:17.423830] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:21:21.624 [2024-11-25 12:20:17.423886] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:21.881 [2024-11-25 12:20:17.560248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.881 [2024-11-25 12:20:17.594541] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:21.881 [2024-11-25 12:20:17.594581] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:21.881 [2024-11-25 12:20:17.594587] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:21.881 [2024-11-25 12:20:17.594592] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:21.881 [2024-11-25 12:20:17.594597] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
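Note: once startup completes (the reactor-start notice just below), the ns_masking setup traced after it reduces to a short RPC sequence. The commands and arguments below are collected from the trace, with $rpc standing in for the full rpc.py path:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1     # 64 MB malloc bdev, 512 B blocks
    $rpc bdev_malloc_create 64 512 -b Malloc2
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

The host side then connects with nvme-cli exactly as traced: nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -a 10.0.0.3 -s 4420 -i 4.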
00:21:21.881 [2024-11-25 12:20:17.594870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:22.447 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:22.447 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:21:22.447 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:22.447 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:22.447 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:21:22.705 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:22.705 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:22.705 [2024-11-25 12:20:18.519005] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:22.705 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:21:22.705 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:21:22.705 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:22.963 Malloc1 00:21:22.963 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:21:23.221 Malloc2 00:21:23.221 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:21:23.478 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:21:23.735 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:23.735 [2024-11-25 12:20:19.559347] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:23.735 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:21:23.735 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I d218b32e-7e9c-4c58-84af-ac6f3dd59cd0 -a 10.0.0.3 -s 4420 -i 4 00:21:23.993 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:21:23.993 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:21:23.993 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:21:23.993 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:21:23.993 12:20:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:21:25.901 12:20:21 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:21:25.901 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:21:25.901 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:21:25.901 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:21:25.901 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:21:25.901 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:21:25.901 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:21:25.901 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:21:25.901 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:21:25.901 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:21:25.901 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:21:25.901 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:25.901 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:21:25.901 [ 0]:0x1 00:21:25.901 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:25.901 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:25.901 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=476c53df9bee4a929d38db6be27c8895 00:21:25.901 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 476c53df9bee4a929d38db6be27c8895 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:25.901 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:21:26.197 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:21:26.197 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:26.197 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:21:26.197 [ 0]:0x1 00:21:26.197 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:26.197 12:20:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:26.197 12:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=476c53df9bee4a929d38db6be27c8895 00:21:26.197 12:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 476c53df9bee4a929d38db6be27c8895 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:26.197 12:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:21:26.197 12:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:26.197 12:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:21:26.197 [ 1]:0x2 00:21:26.197 12:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:21:26.197 12:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:26.197 12:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=91a533c0b5f8472d8e00ee0a8996c387 00:21:26.197 12:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 91a533c0b5f8472d8e00ee0a8996c387 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:26.197 12:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:21:26.197 12:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:26.453 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:26.453 12:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:26.710 12:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:21:26.710 12:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:21:26.710 12:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I d218b32e-7e9c-4c58-84af-ac6f3dd59cd0 -a 10.0.0.3 -s 4420 -i 4 00:21:26.972 12:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:21:26.972 12:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:21:26.972 12:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:21:26.972 12:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:21:26.972 12:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:21:26.972 12:20:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:21:29.009 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:21:29.009 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:21:29.009 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:21:29.009 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:21:29.009 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:21:29.009 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:21:29.009 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:21:29.009 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:21:29.009 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:21:29.009 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:21:29.009 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:21:29.009 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:21:29.009 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:21:29.009 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:21:29.009 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:29.009 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:21:29.009 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:29.009 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:21:29.009 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:29.009 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:21:29.009 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:29.009 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:29.009 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:21:29.009 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:29.009 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:21:29.009 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:29.009 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:29.009 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:29.009 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:21:29.009 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:21:29.009 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:29.009 [ 0]:0x2 00:21:29.009 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:21:29.009 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:29.009 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=91a533c0b5f8472d8e00ee0a8996c387 00:21:29.009 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 91a533c0b5f8472d8e00ee0a8996c387 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:29.009 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:21:29.267 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:21:29.267 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:29.267 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:21:29.267 [ 0]:0x1 00:21:29.267 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:29.267 12:20:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:29.267 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=476c53df9bee4a929d38db6be27c8895 00:21:29.267 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 476c53df9bee4a929d38db6be27c8895 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:29.267 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:21:29.267 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:29.267 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:21:29.267 [ 1]:0x2 00:21:29.267 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:21:29.267 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:29.267 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=91a533c0b5f8472d8e00ee0a8996c387 00:21:29.267 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 91a533c0b5f8472d8e00ee0a8996c387 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:29.267 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:21:29.596 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:21:29.596 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:21:29.596 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:21:29.596 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:21:29.596 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:29.596 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:21:29.596 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:29.596 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:21:29.596 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:29.596 12:20:25 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:21:29.596 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:29.596 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:29.596 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:21:29.596 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:29.596 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:21:29.596 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:29.596 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:29.596 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:29.596 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:21:29.597 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:21:29.597 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:29.597 [ 0]:0x2 00:21:29.597 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:21:29.597 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:29.597 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=91a533c0b5f8472d8e00ee0a8996c387 00:21:29.597 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 91a533c0b5f8472d8e00ee0a8996c387 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:29.597 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:21:29.597 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:29.597 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:29.597 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:21:29.953 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:21:29.953 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I d218b32e-7e9c-4c58-84af-ac6f3dd59cd0 -a 10.0.0.3 -s 4420 -i 4 00:21:29.953 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:21:29.953 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:21:29.953 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:21:29.953 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:21:29.953 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:21:29.953 12:20:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:21:31.867 12:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:21:31.867 12:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:21:31.867 12:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:21:31.867 12:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:21:31.867 12:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:21:31.867 12:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:21:31.867 12:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:21:31.867 12:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:21:32.127 12:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:21:32.127 12:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:21:32.127 12:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:21:32.127 12:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:32.128 12:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:21:32.128 [ 0]:0x1 00:21:32.128 12:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:32.128 12:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:32.128 12:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=476c53df9bee4a929d38db6be27c8895 00:21:32.128 12:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 476c53df9bee4a929d38db6be27c8895 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:32.128 12:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:21:32.128 12:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:32.128 12:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:21:32.128 [ 1]:0x2 00:21:32.128 12:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:21:32.128 12:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:32.128 12:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=91a533c0b5f8472d8e00ee0a8996c387 00:21:32.128 12:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 91a533c0b5f8472d8e00ee0a8996c387 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:32.128 12:20:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 
nqn.2016-06.io.spdk:host1 00:21:32.389 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:21:32.389 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:21:32.389 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:21:32.389 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:21:32.389 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:32.389 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:21:32.389 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:32.389 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:21:32.389 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:21:32.389 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:32.389 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:32.389 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:32.389 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:21:32.389 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:32.389 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:21:32.389 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:32.389 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:32.389 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:32.389 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:21:32.389 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:32.389 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:21:32.389 [ 0]:0x2 00:21:32.389 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:21:32.389 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:32.389 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=91a533c0b5f8472d8e00ee0a8996c387 00:21:32.389 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 91a533c0b5f8472d8e00ee0a8996c387 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:32.389 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:21:32.389 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@652 -- # local es=0 00:21:32.389 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:21:32.389 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:32.389 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:32.390 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:32.390 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:32.390 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:32.390 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:32.390 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:32.390 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:21:32.390 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:21:32.650 [2024-11-25 12:20:28.319651] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:21:32.650 2024/11/25 12:20:28 error on JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:21:32.650 request: 00:21:32.650 { 00:21:32.650 "method": "nvmf_ns_remove_host", 00:21:32.650 "params": { 00:21:32.650 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:32.650 "nsid": 2, 00:21:32.650 "host": "nqn.2016-06.io.spdk:host1" 00:21:32.650 } 00:21:32.650 } 00:21:32.650 Got JSON-RPC error response 00:21:32.650 GoRPCClient: error on JSON-RPC call 00:21:32.650 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:21:32.650 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:32.650 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:32.650 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:32.650 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:21:32.650 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:21:32.650 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:21:32.650 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:21:32.650 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:32.650 12:20:28 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:21:32.650 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:32.650 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:21:32.650 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:32.650 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:21:32.650 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:32.650 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:32.650 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:21:32.650 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:32.651 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:21:32.651 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:32.651 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:32.651 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:32.651 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:21:32.651 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:21:32.651 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:21:32.651 [ 0]:0x2 00:21:32.651 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:21:32.651 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:21:32.651 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=91a533c0b5f8472d8e00ee0a8996c387 00:21:32.651 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 91a533c0b5f8472d8e00ee0a8996c387 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:32.651 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:21:32.651 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:32.651 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:32.651 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=75422 00:21:32.651 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:21:32.651 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 75422 /var/tmp/host.sock 00:21:32.651 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:21:32.651 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@835 -- # '[' -z 75422 ']' 00:21:32.651 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:21:32.651 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:32.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:21:32.651 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:21:32.651 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:32.651 12:20:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:21:32.651 [2024-11-25 12:20:28.502893] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:21:32.651 [2024-11-25 12:20:28.502959] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75422 ] 00:21:32.911 [2024-11-25 12:20:28.640372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:32.911 [2024-11-25 12:20:28.675732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:33.852 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:33.852 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:21:33.852 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:33.852 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:21:34.110 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid b51fa26c-732c-43c0-b1d7-1040f16c63d8 00:21:34.110 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:21:34.110 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g B51FA26C732C43C0B1D71040F16C63D8 -i 00:21:34.369 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid c1bcaec9-0683-408f-86b8-6cde303f3e38 00:21:34.369 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:21:34.369 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g C1BCAEC90683408F86B86CDE303F3E38 -i 00:21:34.369 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:21:34.627 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:21:34.886 12:20:30 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:21:34.886 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:21:35.147 nvme0n1 00:21:35.147 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:21:35.147 12:20:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:21:35.408 nvme1n2 00:21:35.408 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:21:35.408 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:21:35.408 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:21:35.408 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:21:35.408 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:21:35.668 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:21:35.668 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:21:35.668 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:21:35.668 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:21:35.928 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ b51fa26c-732c-43c0-b1d7-1040f16c63d8 == \b\5\1\f\a\2\6\c\-\7\3\2\c\-\4\3\c\0\-\b\1\d\7\-\1\0\4\0\f\1\6\c\6\3\d\8 ]] 00:21:35.928 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:21:35.928 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:21:35.928 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:21:36.190 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ c1bcaec9-0683-408f-86b8-6cde303f3e38 == \c\1\b\c\a\e\c\9\-\0\6\8\3\-\4\0\8\f\-\8\6\b\8\-\6\c\d\e\3\0\3\f\3\e\3\8 ]] 00:21:36.190 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:36.190 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 
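Note: the masked-namespace rebuild traced here hinges on uuid2nguid plus per-host visibility grants. An NGUID is the UUID with the dashes removed; the trace shows the `tr -d -` step, while the uppercasing that produces the -g values is inferred from the output, not shown verbatim. A condensed sketch of the pattern, with flags copied from the trace and $rpc reusing the shorthand from the earlier sketch:

    uuid2nguid() { tr -d - <<< "${1^^}"; }   # ${1^^} (upcase) assumes bash 4+

    nguid=$(uuid2nguid b51fa26c-732c-43c0-b1d7-1040f16c63d8)  # B51FA26C732C43C0B1D71040F16C63D8
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g "$nguid" -i
    $rpc nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

As the ns_is_visible checks earlier in the trace show, a host verifies visibility by grepping `nvme list-ns /dev/nvme0` for the nsid and reading `nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid`; while the namespace is still masked for that host NQN the nguid comes back as 32 zeroes.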
00:21:36.450 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid b51fa26c-732c-43c0-b1d7-1040f16c63d8 00:21:36.450 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:21:36.450 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g B51FA26C732C43C0B1D71040F16C63D8 00:21:36.450 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:21:36.450 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g B51FA26C732C43C0B1D71040F16C63D8 00:21:36.450 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:36.450 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:36.450 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:36.450 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:36.450 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:36.450 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:36.450 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:36.450 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:21:36.450 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g B51FA26C732C43C0B1D71040F16C63D8 00:21:36.711 [2024-11-25 12:20:32.412920] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:21:36.711 [2024-11-25 12:20:32.412955] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:21:36.711 [2024-11-25 12:20:32.412962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:21:36.711 2024/11/25 12:20:32 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:invalid nguid:B51FA26C732C43C0B1D71040F16C63D8 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:21:36.711 request: 00:21:36.711 { 00:21:36.711 "method": "nvmf_subsystem_add_ns", 00:21:36.711 "params": { 00:21:36.711 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:36.711 "namespace": { 00:21:36.711 "bdev_name": "invalid", 00:21:36.711 "nsid": 1, 00:21:36.711 "nguid": "B51FA26C732C43C0B1D71040F16C63D8", 00:21:36.711 "no_auto_visible": false 00:21:36.711 } 00:21:36.711 } 00:21:36.711 } 00:21:36.711 Got JSON-RPC error response 00:21:36.711 GoRPCClient: error on JSON-RPC call 00:21:36.711 12:20:32 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:21:36.711 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:36.711 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:36.711 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:36.711 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid b51fa26c-732c-43c0-b1d7-1040f16c63d8 00:21:36.711 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:21:36.711 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g B51FA26C732C43C0B1D71040F16C63D8 -i 00:21:37.040 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:21:38.966 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:21:38.966 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:21:38.966 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:21:38.966 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:21:38.966 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 75422 00:21:38.966 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 75422 ']' 00:21:38.966 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 75422 00:21:38.966 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:21:38.966 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:38.966 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75422 00:21:38.966 killing process with pid 75422 00:21:38.966 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:38.966 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:38.966 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75422' 00:21:38.966 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 75422 00:21:38.966 12:20:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 75422 00:21:39.225 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:39.485 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:21:39.485 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:21:39.485 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:39.485 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@121 -- # sync 00:21:39.485 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:39.485 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:21:39.485 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:39.485 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:39.485 rmmod nvme_tcp 00:21:39.485 rmmod nvme_fabrics 00:21:39.485 rmmod nvme_keyring 00:21:39.485 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:39.485 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:21:39.485 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:21:39.485 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 75056 ']' 00:21:39.485 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 75056 00:21:39.485 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 75056 ']' 00:21:39.485 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 75056 00:21:39.485 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:21:39.485 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:39.485 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75056 00:21:39.743 killing process with pid 75056 00:21:39.743 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:39.743 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:39.743 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75056' 00:21:39.743 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 75056 00:21:39.743 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 75056 00:21:39.743 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:39.743 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:39.743 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:39.743 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:21:39.743 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:21:39.743 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:21:39.743 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:39.743 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:39.743 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:39.743 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:39.743 12:20:35 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:39.743 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:39.743 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:39.743 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:39.743 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:39.743 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:39.743 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:39.743 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:39.743 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:40.004 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:40.004 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:40.004 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:40.004 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:40.004 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:40.004 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:40.004 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:40.004 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@300 -- # return 0 00:21:40.004 00:21:40.004 real 0m18.838s 00:21:40.004 user 0m30.756s 00:21:40.004 sys 0m2.310s 00:21:40.004 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:40.004 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:21:40.004 ************************************ 00:21:40.004 END TEST nvmf_ns_masking 00:21:40.004 ************************************ 00:21:40.004 12:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 0 -eq 1 ]] 00:21:40.004 12:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:21:40.004 12:20:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:21:40.004 12:20:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:40.004 12:20:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:40.004 12:20:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:40.004 ************************************ 00:21:40.004 START TEST nvmf_auth_target 00:21:40.004 ************************************ 00:21:40.004 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh 
--transport=tcp 00:21:40.004 * Looking for test storage... 00:21:40.004 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:40.004 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:40.004 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:21:40.004 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:40.263 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:40.263 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:40.263 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:40.263 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:40.263 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:21:40.263 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:21:40.263 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:21:40.263 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:21:40.263 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:21:40.263 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:21:40.263 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:21:40.263 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:40.263 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:21:40.263 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:21:40.263 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:40.263 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:40.263 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:21:40.263 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:21:40.263 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:40.263 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:21:40.263 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:21:40.263 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:21:40.263 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:21:40.263 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:40.263 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:21:40.263 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:21:40.263 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:40.263 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:40.263 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:21:40.263 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:40.263 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:40.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.263 --rc genhtml_branch_coverage=1 00:21:40.263 --rc genhtml_function_coverage=1 00:21:40.263 --rc genhtml_legend=1 00:21:40.263 --rc geninfo_all_blocks=1 00:21:40.263 --rc geninfo_unexecuted_blocks=1 00:21:40.263 00:21:40.263 ' 00:21:40.263 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:40.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.263 --rc genhtml_branch_coverage=1 00:21:40.263 --rc genhtml_function_coverage=1 00:21:40.263 --rc genhtml_legend=1 00:21:40.263 --rc geninfo_all_blocks=1 00:21:40.263 --rc geninfo_unexecuted_blocks=1 00:21:40.263 00:21:40.263 ' 00:21:40.263 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:40.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.263 --rc genhtml_branch_coverage=1 00:21:40.263 --rc genhtml_function_coverage=1 00:21:40.263 --rc genhtml_legend=1 00:21:40.263 --rc geninfo_all_blocks=1 00:21:40.264 --rc geninfo_unexecuted_blocks=1 00:21:40.264 00:21:40.264 ' 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:40.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:40.264 --rc genhtml_branch_coverage=1 00:21:40.264 --rc genhtml_function_coverage=1 00:21:40.264 --rc genhtml_legend=1 00:21:40.264 --rc geninfo_all_blocks=1 00:21:40.264 --rc geninfo_unexecuted_blocks=1 00:21:40.264 00:21:40.264 ' 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
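The lcov probe above hinges on a component-wise version comparison: both strings are split on '.', '-' and ':' into arrays and compared numerically field by field, with missing fields counting as zero. A condensed, self-contained rendering of what the traced lt/cmp_versions calls do (a sketch assuming purely numeric components, not the verbatim scripts/common.sh):

  # Succeeds (returns 0) when $1 sorts strictly before $2, e.g. version_lt 1.15 2.
  version_lt() {
      local -a a b
      IFS='.-:' read -ra a <<< "$1"
      IFS='.-:' read -ra b <<< "$2"
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # first differing field decides
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      done
      return 1   # equal versions are not less-than
  }

  version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov older than 2.x"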
nvmf/common.sh@7 -- # uname -s 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:40.264 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:40.264 
12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:40.264 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:40.264 Cannot find device "nvmf_init_br" 00:21:40.265 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:21:40.265 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:40.265 Cannot find device "nvmf_init_br2" 00:21:40.265 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:21:40.265 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:40.265 Cannot find device "nvmf_tgt_br" 00:21:40.265 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:21:40.265 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:40.265 Cannot find device "nvmf_tgt_br2" 00:21:40.265 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:21:40.265 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:40.265 Cannot find device "nvmf_init_br" 00:21:40.265 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:21:40.265 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:40.265 Cannot find device "nvmf_init_br2" 00:21:40.265 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:21:40.265 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:40.265 Cannot find device "nvmf_tgt_br" 00:21:40.265 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:21:40.265 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:40.265 Cannot find device "nvmf_tgt_br2" 00:21:40.265 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:21:40.265 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:40.265 Cannot find device "nvmf_br" 00:21:40.265 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:21:40.265 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:40.265 Cannot find device "nvmf_init_if" 00:21:40.265 12:20:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:21:40.265 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:40.265 Cannot find device "nvmf_init_if2" 00:21:40.265 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:21:40.265 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:40.265 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:40.265 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:21:40.265 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:40.265 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:40.265 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:21:40.265 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:40.265 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:40.265 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:40.265 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:40.265 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:40.265 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:40.265 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:40.265 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:40.265 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:40.265 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:40.265 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:40.265 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:40.265 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:40.525 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:40.525 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:40.525 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:40.525 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:40.525 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:40.525 12:20:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:40.525 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:40.525 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:40.525 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:40.525 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:40.525 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:40.525 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:40.525 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:40.525 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:40.525 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:40.525 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:40.525 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:40.525 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:40.525 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:40.525 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:40.525 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:40.525 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:21:40.525 00:21:40.525 --- 10.0.0.3 ping statistics --- 00:21:40.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.525 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:21:40.525 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:40.525 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:40.525 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.036 ms 00:21:40.525 00:21:40.525 --- 10.0.0.4 ping statistics --- 00:21:40.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.525 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:21:40.525 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:40.525 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:40.525 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:21:40.525 00:21:40.525 --- 10.0.0.1 ping statistics --- 00:21:40.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.525 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:21:40.525 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:40.525 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:40.525 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:21:40.525 00:21:40.525 --- 10.0.0.2 ping statistics --- 00:21:40.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.525 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:21:40.525 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:40.525 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:21:40.525 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:40.525 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:40.525 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:40.525 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:40.525 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:40.525 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:40.525 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:40.525 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:21:40.525 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:40.525 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:40.525 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.525 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=75900 00:21:40.525 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 75900 00:21:40.525 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 75900 ']' 00:21:40.525 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:40.525 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:21:40.525 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:40.525 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
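Stripped of tracing, the fixture that the four successful pings just validated is small: one veth pair per interface, the target ends moved into a private namespace, everything enslaved to one bridge, and TCP/4420 opened. A condensed sketch of the nvmf_veth_init sequence above, showing only the first interface of each side (nvmf_init_if2 and nvmf_tgt_if2 repeat the same pattern on 10.0.0.2 and 10.0.0.4):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator pair (host side)
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # target end lives in the netns
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_tgt_br up
  ip link add nvmf_br type bridge; ip link set nvmf_br up        # tie both sides together
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT  # tagged rule, see iptr above
  ping -c 1 10.0.0.3                                             # host -> target across the bridge

The earlier "Cannot find device" and "Cannot open network namespace" messages are expected, not failures: teardown of any previous topology runs unconditionally (each command is followed by true in the trace) before this fresh setup.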
00:21:40.525 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:40.525 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.461 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:41.461 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:41.461 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:41.461 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:41.461 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.461 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:41.461 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=75940 00:21:41.461 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:41.461 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:21:41.461 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:21:41.461 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:21:41.461 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:41.461 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:41.461 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:21:41.461 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:21:41.461 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:41.461 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ab55f9b7ac6d30febed4b2105947c0bbbc5db2037d64c083 00:21:41.461 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:21:41.461 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.cem 00:21:41.461 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ab55f9b7ac6d30febed4b2105947c0bbbc5db2037d64c083 0 00:21:41.461 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ab55f9b7ac6d30febed4b2105947c0bbbc5db2037d64c083 0 00:21:41.461 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:41.461 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:41.461 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ab55f9b7ac6d30febed4b2105947c0bbbc5db2037d64c083 00:21:41.461 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:21:41.461 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:21:41.461 12:20:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.cem 00:21:41.461 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.cem 00:21:41.461 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.cem 00:21:41.461 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:21:41.461 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:21:41.461 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:41.461 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:41.461 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:21:41.461 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:21:41.461 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:41.461 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0a9c1cc1146609986a32dd0b9a9e599e94d0ae8fb0cb50b6d12714d0dd1dd604 00:21:41.461 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:21:41.461 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.lFs 00:21:41.461 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 0a9c1cc1146609986a32dd0b9a9e599e94d0ae8fb0cb50b6d12714d0dd1dd604 3 00:21:41.461 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 0a9c1cc1146609986a32dd0b9a9e599e94d0ae8fb0cb50b6d12714d0dd1dd604 3 00:21:41.461 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:41.461 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:41.461 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0a9c1cc1146609986a32dd0b9a9e599e94d0ae8fb0cb50b6d12714d0dd1dd604 00:21:41.461 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:21:41.461 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:21:41.461 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.lFs 00:21:41.462 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.lFs 00:21:41.462 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.lFs 00:21:41.462 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:21:41.462 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:21:41.462 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:41.462 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:41.462 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:21:41.462 12:20:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:21:41.462 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:41.462 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ba733daf46c21f49030ca3231d10bf79 00:21:41.462 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:21:41.462 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.RyN 00:21:41.462 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key ba733daf46c21f49030ca3231d10bf79 1 00:21:41.462 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ba733daf46c21f49030ca3231d10bf79 1 00:21:41.462 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:41.462 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:41.462 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ba733daf46c21f49030ca3231d10bf79 00:21:41.462 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:21:41.462 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:21:41.462 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.RyN 00:21:41.462 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.RyN 00:21:41.462 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.RyN 00:21:41.723 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:21:41.723 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:21:41.723 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:41.723 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:41.723 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:21:41.723 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:21:41.723 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:41.723 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=eed27db61a5e8472d0987546d7abb83becab7e7852d1304d 00:21:41.723 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:21:41.723 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.CbZ 00:21:41.723 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key eed27db61a5e8472d0987546d7abb83becab7e7852d1304d 2 00:21:41.723 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 eed27db61a5e8472d0987546d7abb83becab7e7852d1304d 2 00:21:41.723 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:41.723 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:41.723 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=eed27db61a5e8472d0987546d7abb83becab7e7852d1304d 00:21:41.723 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:21:41.723 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:21:41.723 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.CbZ 00:21:41.723 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.CbZ 00:21:41.723 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.CbZ 00:21:41.723 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:21:41.723 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:21:41.723 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:41.723 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:41.723 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:21:41.723 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:21:41.723 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:41.723 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6f2f41e1fcec35d1f18af381b8c7b84a566e7905a75a69b9 00:21:41.723 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:21:41.723 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.8mg 00:21:41.724 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 6f2f41e1fcec35d1f18af381b8c7b84a566e7905a75a69b9 2 00:21:41.724 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 6f2f41e1fcec35d1f18af381b8c7b84a566e7905a75a69b9 2 00:21:41.724 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:41.724 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:41.724 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6f2f41e1fcec35d1f18af381b8c7b84a566e7905a75a69b9 00:21:41.724 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:21:41.724 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:21:41.724 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.8mg 00:21:41.724 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.8mg 00:21:41.724 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.8mg 00:21:41.724 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:21:41.724 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:21:41.724 12:20:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:41.724 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:41.724 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:21:41.724 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:21:41.724 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:41.724 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=fb00bb228921d37c465ec26559d446d6 00:21:41.724 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:21:41.724 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.kJv 00:21:41.724 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key fb00bb228921d37c465ec26559d446d6 1 00:21:41.724 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 fb00bb228921d37c465ec26559d446d6 1 00:21:41.724 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:41.724 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:41.724 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=fb00bb228921d37c465ec26559d446d6 00:21:41.724 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:21:41.724 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:21:41.724 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.kJv 00:21:41.724 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.kJv 00:21:41.724 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.kJv 00:21:41.724 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:21:41.724 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:21:41.724 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:41.724 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:21:41.724 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:21:41.724 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:21:41.724 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:41.724 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=ce2652cf0e7175940620cb388f16142c362f7adcb90a437658de6df5120591be 00:21:41.724 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:21:41.724 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.no1 00:21:41.724 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
ce2652cf0e7175940620cb388f16142c362f7adcb90a437658de6df5120591be 3 00:21:41.724 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 ce2652cf0e7175940620cb388f16142c362f7adcb90a437658de6df5120591be 3 00:21:41.724 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:21:41.724 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:21:41.724 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=ce2652cf0e7175940620cb388f16142c362f7adcb90a437658de6df5120591be 00:21:41.724 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:21:41.724 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:21:41.724 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.no1 00:21:41.724 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.no1 00:21:41.724 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.no1 00:21:41.724 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:21:41.724 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 75900 00:21:41.724 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 75900 ']' 00:21:41.724 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:41.724 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:41.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:41.724 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:41.724 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:41.724 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.983 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:41.983 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:41.983 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 75940 /var/tmp/host.sock 00:21:41.983 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 75940 ']' 00:21:41.983 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:21:41.983 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:41.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:21:41.983 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
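Each gen_dhchap_key call in this block reduces to: read N random bytes as a hex string, wrap it in the DHHC-1 secret format, and park it in a mode-0600 temp file. The python body is elided by the tracer (it only logs "python -"), so the encoding below is an assumption based on the standard NVMe DH-HMAC-CHAP secret representation, base64 of the secret followed by its little-endian CRC-32, which is consistent with the DHHC-1:00:... strings that appear verbatim when the keys are used later in this log:

  # Sketch: gen_dhchap_key <null|sha256|sha384|sha512> <hex-length>
  gen_dhchap_key() {
      local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
      local key file
      key=$(xxd -p -c0 -l $(($2 / 2)) /dev/urandom)    # $2/2 random bytes -> $2 hex chars
      file=$(mktemp -t "spdk.key-$1.XXX")
      # Assumed encoding: DHHC-1:<digest id, 2 hex digits>:<base64(secret || crc32_le)>:
      python3 -c 'import base64,sys,zlib; key=sys.argv[1].encode(); crc=zlib.crc32(key).to_bytes(4,"little"); print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key+crc).decode()))' "$key" "${digests[$1]}" > "$file"
      chmod 0600 "$file"
      echo "$file"
  }

  keys[0]=$(gen_dhchap_key null 48)    # e.g. /tmp/spdk.key-null.XXX containing DHHC-1:00:...: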
00:21:41.983 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:41.983 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.242 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:42.242 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:42.242 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:21:42.242 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.242 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.242 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.242 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:21:42.242 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.cem 00:21:42.242 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.242 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.242 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.242 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.cem 00:21:42.242 12:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.cem 00:21:42.504 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.lFs ]] 00:21:42.504 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.lFs 00:21:42.504 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.504 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.504 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.504 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.lFs 00:21:42.504 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.lFs 00:21:42.504 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:21:42.504 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.RyN 00:21:42.504 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.504 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.504 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.504 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.RyN 00:21:42.504 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.RyN 00:21:42.766 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.CbZ ]] 00:21:42.766 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.CbZ 00:21:42.766 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.766 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.766 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.766 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.CbZ 00:21:42.766 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.CbZ 00:21:43.028 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:21:43.028 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.8mg 00:21:43.028 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.028 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.028 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.028 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.8mg 00:21:43.028 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.8mg 00:21:43.289 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.kJv ]] 00:21:43.290 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.kJv 00:21:43.290 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.290 12:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.290 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.290 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.kJv 00:21:43.290 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.kJv 00:21:43.549 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:21:43.549 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.no1 00:21:43.549 12:20:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.549 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.549 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.549 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.no1 00:21:43.549 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.no1 00:21:43.549 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:21:43.549 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:43.550 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:43.550 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:43.550 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:43.810 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:43.810 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:21:43.810 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:43.810 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:43.810 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:43.810 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:43.810 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:43.810 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:43.810 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.810 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.810 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.810 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:43.810 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:43.810 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:44.072 00:21:44.072 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:44.072 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:44.072 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.331 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.331 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.331 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.331 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.331 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.331 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:44.331 { 00:21:44.331 "auth": { 00:21:44.331 "dhgroup": "null", 00:21:44.331 "digest": "sha256", 00:21:44.331 "state": "completed" 00:21:44.331 }, 00:21:44.331 "cntlid": 1, 00:21:44.331 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:21:44.331 "listen_address": { 00:21:44.331 "adrfam": "IPv4", 00:21:44.331 "traddr": "10.0.0.3", 00:21:44.331 "trsvcid": "4420", 00:21:44.331 "trtype": "TCP" 00:21:44.331 }, 00:21:44.331 "peer_address": { 00:21:44.331 "adrfam": "IPv4", 00:21:44.331 "traddr": "10.0.0.1", 00:21:44.331 "trsvcid": "58740", 00:21:44.331 "trtype": "TCP" 00:21:44.331 }, 00:21:44.331 "qid": 0, 00:21:44.331 "state": "enabled", 00:21:44.331 "thread": "nvmf_tgt_poll_group_000" 00:21:44.331 } 00:21:44.331 ]' 00:21:44.331 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:44.331 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:44.331 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:44.331 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:44.331 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:44.591 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.591 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.591 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.591 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWI1NWY5YjdhYzZkMzBmZWJlZDRiMjEwNTk0N2MwYmJiYzVkYjIwMzdkNjRjMDgzTJxClg==: --dhchap-ctrl-secret DHHC-1:03:MGE5YzFjYzExNDY2MDk5ODZhMzJkZDBiOWE5ZTU5OWU5NGQwYWU4ZmIwY2I1MGI2ZDEyNzE0ZDBkZDFkZDYwNLfIQ3U=: 00:21:44.591 12:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:00:YWI1NWY5YjdhYzZkMzBmZWJlZDRiMjEwNTk0N2MwYmJiYzVkYjIwMzdkNjRjMDgzTJxClg==: --dhchap-ctrl-secret DHHC-1:03:MGE5YzFjYzExNDY2MDk5ODZhMzJkZDBiOWE5ZTU5OWU5NGQwYWU4ZmIwY2I1MGI2ZDEyNzE0ZDBkZDFkZDYwNLfIQ3U=: 00:21:48.800 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.800 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.800 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:21:48.800 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.800 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.800 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.800 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:48.800 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:48.800 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:49.058 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:21:49.058 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:49.058 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:49.058 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:49.058 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:49.058 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:49.058 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.058 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.058 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.058 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.058 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.058 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.058 12:20:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.316 00:21:49.316 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:49.316 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:49.316 12:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.316 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.316 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.316 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.316 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.574 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.574 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:49.574 { 00:21:49.574 "auth": { 00:21:49.574 "dhgroup": "null", 00:21:49.574 "digest": "sha256", 00:21:49.574 "state": "completed" 00:21:49.574 }, 00:21:49.574 "cntlid": 3, 00:21:49.574 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:21:49.574 "listen_address": { 00:21:49.574 "adrfam": "IPv4", 00:21:49.574 "traddr": "10.0.0.3", 00:21:49.574 "trsvcid": "4420", 00:21:49.574 "trtype": "TCP" 00:21:49.574 }, 00:21:49.574 "peer_address": { 00:21:49.574 "adrfam": "IPv4", 00:21:49.574 "traddr": "10.0.0.1", 00:21:49.574 "trsvcid": "41258", 00:21:49.574 "trtype": "TCP" 00:21:49.574 }, 00:21:49.574 "qid": 0, 00:21:49.574 "state": "enabled", 00:21:49.574 "thread": "nvmf_tgt_poll_group_000" 00:21:49.574 } 00:21:49.574 ]' 00:21:49.574 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:49.574 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:49.574 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:49.574 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:49.574 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:49.574 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.574 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.574 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.832 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmE3MzNkYWY0NmMyMWY0OTAzMGNhMzIzMWQxMGJmNzm5Zgcu: --dhchap-ctrl-secret 
DHHC-1:02:ZWVkMjdkYjYxYTVlODQ3MmQwOTg3NTQ2ZDdhYmI4M2JlY2FiN2U3ODUyZDEzMDRkpA1b6w==: 00:21:49.832 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:01:YmE3MzNkYWY0NmMyMWY0OTAzMGNhMzIzMWQxMGJmNzm5Zgcu: --dhchap-ctrl-secret DHHC-1:02:ZWVkMjdkYjYxYTVlODQ3MmQwOTg3NTQ2ZDdhYmI4M2JlY2FiN2U3ODUyZDEzMDRkpA1b6w==: 00:21:50.429 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.429 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.429 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:21:50.429 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.429 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.429 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.429 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:50.429 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:50.429 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:50.689 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:21:50.689 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:50.689 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:50.689 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:50.689 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:50.689 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:50.689 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:50.689 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.689 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.689 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.689 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:50.689 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:50.689 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:50.947 00:21:50.947 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:50.947 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:50.947 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:51.204 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:51.204 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:51.204 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.204 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.204 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.204 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:51.204 { 00:21:51.204 "auth": { 00:21:51.204 "dhgroup": "null", 00:21:51.204 "digest": "sha256", 00:21:51.204 "state": "completed" 00:21:51.204 }, 00:21:51.204 "cntlid": 5, 00:21:51.204 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:21:51.204 "listen_address": { 00:21:51.204 "adrfam": "IPv4", 00:21:51.204 "traddr": "10.0.0.3", 00:21:51.204 "trsvcid": "4420", 00:21:51.204 "trtype": "TCP" 00:21:51.204 }, 00:21:51.204 "peer_address": { 00:21:51.204 "adrfam": "IPv4", 00:21:51.204 "traddr": "10.0.0.1", 00:21:51.204 "trsvcid": "41290", 00:21:51.204 "trtype": "TCP" 00:21:51.204 }, 00:21:51.204 "qid": 0, 00:21:51.204 "state": "enabled", 00:21:51.204 "thread": "nvmf_tgt_poll_group_000" 00:21:51.204 } 00:21:51.204 ]' 00:21:51.204 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:51.204 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:51.204 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:51.204 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:51.204 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:51.204 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:51.204 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:51.204 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.461 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:NmYyZjQxZTFmY2VjMzVkMWYxOGFmMzgxYjhjN2I4NGE1NjZlNzkwNWE3NWE2OWI5FkTRXA==: --dhchap-ctrl-secret DHHC-1:01:ZmIwMGJiMjI4OTIxZDM3YzQ2NWVjMjY1NTlkNDQ2ZDYpEJql: 00:21:51.461 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:02:NmYyZjQxZTFmY2VjMzVkMWYxOGFmMzgxYjhjN2I4NGE1NjZlNzkwNWE3NWE2OWI5FkTRXA==: --dhchap-ctrl-secret DHHC-1:01:ZmIwMGJiMjI4OTIxZDM3YzQ2NWVjMjY1NTlkNDQ2ZDYpEJql: 00:21:52.028 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:52.028 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:52.028 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:21:52.028 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.028 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.028 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.028 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:52.028 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:52.028 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:52.287 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:21:52.287 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:52.287 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:52.287 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:52.287 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:52.287 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:52.287 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key3 00:21:52.287 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.287 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.287 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.287 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:52.287 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:52.287 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:52.545 00:21:52.545 12:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:52.545 12:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:52.545 12:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.816 12:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.816 12:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.816 12:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.816 12:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.816 12:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.816 12:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:52.816 { 00:21:52.816 "auth": { 00:21:52.816 "dhgroup": "null", 00:21:52.816 "digest": "sha256", 00:21:52.816 "state": "completed" 00:21:52.816 }, 00:21:52.816 "cntlid": 7, 00:21:52.816 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:21:52.816 "listen_address": { 00:21:52.816 "adrfam": "IPv4", 00:21:52.816 "traddr": "10.0.0.3", 00:21:52.816 "trsvcid": "4420", 00:21:52.816 "trtype": "TCP" 00:21:52.816 }, 00:21:52.816 "peer_address": { 00:21:52.816 "adrfam": "IPv4", 00:21:52.816 "traddr": "10.0.0.1", 00:21:52.816 "trsvcid": "41322", 00:21:52.816 "trtype": "TCP" 00:21:52.816 }, 00:21:52.816 "qid": 0, 00:21:52.816 "state": "enabled", 00:21:52.816 "thread": "nvmf_tgt_poll_group_000" 00:21:52.816 } 00:21:52.816 ]' 00:21:52.816 12:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:52.816 12:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:52.816 12:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:52.816 12:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:52.816 12:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:52.816 12:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.816 12:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.816 12:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.087 12:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:Y2UyNjUyY2YwZTcxNzU5NDA2MjBjYjM4OGYxNjE0MmMzNjJmN2FkY2I5MGE0Mzc2NThkZTZkZjUxMjA1OTFiZRcJPQM=: 00:21:53.087 12:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:03:Y2UyNjUyY2YwZTcxNzU5NDA2MjBjYjM4OGYxNjE0MmMzNjJmN2FkY2I5MGE0Mzc2NThkZTZkZjUxMjA1OTFiZRcJPQM=: 00:21:53.653 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.653 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:21:53.653 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.653 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.653 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.653 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:53.653 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:53.653 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:53.653 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:53.910 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:21:53.910 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:53.910 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:53.910 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:53.910 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:53.910 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:53.910 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:53.910 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.910 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.910 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.910 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:53.910 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:53.910 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:54.167 00:21:54.167 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:54.167 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.167 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:54.425 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.425 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.425 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.425 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.425 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.425 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:54.425 { 00:21:54.425 "auth": { 00:21:54.426 "dhgroup": "ffdhe2048", 00:21:54.426 "digest": "sha256", 00:21:54.426 "state": "completed" 00:21:54.426 }, 00:21:54.426 "cntlid": 9, 00:21:54.426 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:21:54.426 "listen_address": { 00:21:54.426 "adrfam": "IPv4", 00:21:54.426 "traddr": "10.0.0.3", 00:21:54.426 "trsvcid": "4420", 00:21:54.426 "trtype": "TCP" 00:21:54.426 }, 00:21:54.426 "peer_address": { 00:21:54.426 "adrfam": "IPv4", 00:21:54.426 "traddr": "10.0.0.1", 00:21:54.426 "trsvcid": "41358", 00:21:54.426 "trtype": "TCP" 00:21:54.426 }, 00:21:54.426 "qid": 0, 00:21:54.426 "state": "enabled", 00:21:54.426 "thread": "nvmf_tgt_poll_group_000" 00:21:54.426 } 00:21:54.426 ]' 00:21:54.426 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:54.426 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:54.426 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:54.426 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:54.426 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:54.426 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.426 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.426 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.683 
12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWI1NWY5YjdhYzZkMzBmZWJlZDRiMjEwNTk0N2MwYmJiYzVkYjIwMzdkNjRjMDgzTJxClg==: --dhchap-ctrl-secret DHHC-1:03:MGE5YzFjYzExNDY2MDk5ODZhMzJkZDBiOWE5ZTU5OWU5NGQwYWU4ZmIwY2I1MGI2ZDEyNzE0ZDBkZDFkZDYwNLfIQ3U=: 00:21:54.683 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:00:YWI1NWY5YjdhYzZkMzBmZWJlZDRiMjEwNTk0N2MwYmJiYzVkYjIwMzdkNjRjMDgzTJxClg==: --dhchap-ctrl-secret DHHC-1:03:MGE5YzFjYzExNDY2MDk5ODZhMzJkZDBiOWE5ZTU5OWU5NGQwYWU4ZmIwY2I1MGI2ZDEyNzE0ZDBkZDFkZDYwNLfIQ3U=: 00:21:55.261 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:55.261 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:55.261 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:21:55.261 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.261 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.261 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.261 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:55.261 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:55.261 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:55.520 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:21:55.520 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:55.520 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:55.520 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:55.520 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:55.520 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:55.520 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:55.520 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.520 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.520 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.520 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:55.520 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:55.520 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:55.777 00:21:55.777 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:55.777 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.777 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:56.034 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.034 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:56.034 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.034 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.034 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.034 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:56.034 { 00:21:56.034 "auth": { 00:21:56.034 "dhgroup": "ffdhe2048", 00:21:56.034 "digest": "sha256", 00:21:56.034 "state": "completed" 00:21:56.034 }, 00:21:56.034 "cntlid": 11, 00:21:56.034 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:21:56.034 "listen_address": { 00:21:56.034 "adrfam": "IPv4", 00:21:56.034 "traddr": "10.0.0.3", 00:21:56.034 "trsvcid": "4420", 00:21:56.034 "trtype": "TCP" 00:21:56.034 }, 00:21:56.034 "peer_address": { 00:21:56.034 "adrfam": "IPv4", 00:21:56.034 "traddr": "10.0.0.1", 00:21:56.034 "trsvcid": "41386", 00:21:56.034 "trtype": "TCP" 00:21:56.035 }, 00:21:56.035 "qid": 0, 00:21:56.035 "state": "enabled", 00:21:56.035 "thread": "nvmf_tgt_poll_group_000" 00:21:56.035 } 00:21:56.035 ]' 00:21:56.035 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:56.035 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:56.035 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:56.035 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:56.035 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:56.035 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:56.035 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:56.035 
12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.292 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmE3MzNkYWY0NmMyMWY0OTAzMGNhMzIzMWQxMGJmNzm5Zgcu: --dhchap-ctrl-secret DHHC-1:02:ZWVkMjdkYjYxYTVlODQ3MmQwOTg3NTQ2ZDdhYmI4M2JlY2FiN2U3ODUyZDEzMDRkpA1b6w==: 00:21:56.292 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:01:YmE3MzNkYWY0NmMyMWY0OTAzMGNhMzIzMWQxMGJmNzm5Zgcu: --dhchap-ctrl-secret DHHC-1:02:ZWVkMjdkYjYxYTVlODQ3MmQwOTg3NTQ2ZDdhYmI4M2JlY2FiN2U3ODUyZDEzMDRkpA1b6w==: 00:21:56.858 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:56.858 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:56.858 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:21:56.858 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.858 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.858 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.858 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:56.858 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:56.858 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:57.117 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:21:57.117 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:57.117 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:57.117 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:57.117 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:57.117 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:57.117 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:57.117 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.117 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.117 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:21:57.117 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:57.117 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:57.117 12:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:57.377 00:21:57.377 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:57.377 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:57.377 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.638 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.638 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:57.638 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.638 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.638 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.638 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:57.638 { 00:21:57.638 "auth": { 00:21:57.638 "dhgroup": "ffdhe2048", 00:21:57.638 "digest": "sha256", 00:21:57.638 "state": "completed" 00:21:57.638 }, 00:21:57.638 "cntlid": 13, 00:21:57.638 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:21:57.638 "listen_address": { 00:21:57.638 "adrfam": "IPv4", 00:21:57.638 "traddr": "10.0.0.3", 00:21:57.638 "trsvcid": "4420", 00:21:57.638 "trtype": "TCP" 00:21:57.638 }, 00:21:57.638 "peer_address": { 00:21:57.638 "adrfam": "IPv4", 00:21:57.638 "traddr": "10.0.0.1", 00:21:57.638 "trsvcid": "41418", 00:21:57.638 "trtype": "TCP" 00:21:57.638 }, 00:21:57.638 "qid": 0, 00:21:57.638 "state": "enabled", 00:21:57.638 "thread": "nvmf_tgt_poll_group_000" 00:21:57.638 } 00:21:57.638 ]' 00:21:57.638 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:57.638 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:57.638 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:57.638 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:57.638 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:57.638 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:57.638 12:20:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.638 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:57.901 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmYyZjQxZTFmY2VjMzVkMWYxOGFmMzgxYjhjN2I4NGE1NjZlNzkwNWE3NWE2OWI5FkTRXA==: --dhchap-ctrl-secret DHHC-1:01:ZmIwMGJiMjI4OTIxZDM3YzQ2NWVjMjY1NTlkNDQ2ZDYpEJql: 00:21:57.901 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:02:NmYyZjQxZTFmY2VjMzVkMWYxOGFmMzgxYjhjN2I4NGE1NjZlNzkwNWE3NWE2OWI5FkTRXA==: --dhchap-ctrl-secret DHHC-1:01:ZmIwMGJiMjI4OTIxZDM3YzQ2NWVjMjY1NTlkNDQ2ZDYpEJql: 00:21:58.471 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:58.471 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:58.471 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:21:58.471 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.471 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.471 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.471 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:58.471 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:58.471 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:58.733 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:21:58.733 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:58.733 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:58.733 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:58.733 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:58.733 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:58.733 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key3 00:21:58.733 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.733 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:21:58.733 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.733 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:58.733 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:58.733 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:58.994 00:21:58.994 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:58.994 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:58.994 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.256 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.256 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.256 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.256 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.256 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.256 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:59.256 { 00:21:59.256 "auth": { 00:21:59.256 "dhgroup": "ffdhe2048", 00:21:59.256 "digest": "sha256", 00:21:59.256 "state": "completed" 00:21:59.256 }, 00:21:59.256 "cntlid": 15, 00:21:59.256 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:21:59.256 "listen_address": { 00:21:59.256 "adrfam": "IPv4", 00:21:59.256 "traddr": "10.0.0.3", 00:21:59.256 "trsvcid": "4420", 00:21:59.256 "trtype": "TCP" 00:21:59.256 }, 00:21:59.256 "peer_address": { 00:21:59.256 "adrfam": "IPv4", 00:21:59.256 "traddr": "10.0.0.1", 00:21:59.256 "trsvcid": "46288", 00:21:59.256 "trtype": "TCP" 00:21:59.256 }, 00:21:59.256 "qid": 0, 00:21:59.256 "state": "enabled", 00:21:59.256 "thread": "nvmf_tgt_poll_group_000" 00:21:59.256 } 00:21:59.256 ]' 00:21:59.256 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:59.256 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:59.256 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:59.256 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:59.256 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:59.256 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:59.256 
12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.256 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:59.514 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2UyNjUyY2YwZTcxNzU5NDA2MjBjYjM4OGYxNjE0MmMzNjJmN2FkY2I5MGE0Mzc2NThkZTZkZjUxMjA1OTFiZRcJPQM=: 00:21:59.514 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:03:Y2UyNjUyY2YwZTcxNzU5NDA2MjBjYjM4OGYxNjE0MmMzNjJmN2FkY2I5MGE0Mzc2NThkZTZkZjUxMjA1OTFiZRcJPQM=: 00:22:00.082 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:00.082 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:00.082 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:22:00.082 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.082 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.082 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.082 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:00.082 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:00.082 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:00.082 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:00.341 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:22:00.341 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:00.341 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:00.341 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:00.341 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:00.341 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:00.341 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:00.341 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.341 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:00.341 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.341 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:00.341 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:00.341 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:00.598 00:22:00.598 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:00.598 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:00.598 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:00.854 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.854 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:00.854 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.854 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.854 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.854 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:00.854 { 00:22:00.854 "auth": { 00:22:00.854 "dhgroup": "ffdhe3072", 00:22:00.854 "digest": "sha256", 00:22:00.854 "state": "completed" 00:22:00.854 }, 00:22:00.854 "cntlid": 17, 00:22:00.854 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:22:00.854 "listen_address": { 00:22:00.854 "adrfam": "IPv4", 00:22:00.854 "traddr": "10.0.0.3", 00:22:00.854 "trsvcid": "4420", 00:22:00.854 "trtype": "TCP" 00:22:00.854 }, 00:22:00.854 "peer_address": { 00:22:00.854 "adrfam": "IPv4", 00:22:00.854 "traddr": "10.0.0.1", 00:22:00.854 "trsvcid": "46326", 00:22:00.854 "trtype": "TCP" 00:22:00.854 }, 00:22:00.854 "qid": 0, 00:22:00.854 "state": "enabled", 00:22:00.854 "thread": "nvmf_tgt_poll_group_000" 00:22:00.854 } 00:22:00.854 ]' 00:22:00.854 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:00.854 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:00.854 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:00.854 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:00.854 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:01.112 12:20:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:01.112 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:01.112 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:01.112 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWI1NWY5YjdhYzZkMzBmZWJlZDRiMjEwNTk0N2MwYmJiYzVkYjIwMzdkNjRjMDgzTJxClg==: --dhchap-ctrl-secret DHHC-1:03:MGE5YzFjYzExNDY2MDk5ODZhMzJkZDBiOWE5ZTU5OWU5NGQwYWU4ZmIwY2I1MGI2ZDEyNzE0ZDBkZDFkZDYwNLfIQ3U=: 00:22:01.112 12:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:00:YWI1NWY5YjdhYzZkMzBmZWJlZDRiMjEwNTk0N2MwYmJiYzVkYjIwMzdkNjRjMDgzTJxClg==: --dhchap-ctrl-secret DHHC-1:03:MGE5YzFjYzExNDY2MDk5ODZhMzJkZDBiOWE5ZTU5OWU5NGQwYWU4ZmIwY2I1MGI2ZDEyNzE0ZDBkZDFkZDYwNLfIQ3U=: 00:22:01.678 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:01.678 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:01.678 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:22:01.678 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.678 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.937 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.937 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:01.937 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:01.937 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:01.937 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:22:01.937 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:01.937 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:01.937 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:01.937 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:01.937 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:01.937 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:22:01.937 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.937 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.937 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.937 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.938 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.938 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.505 00:22:02.505 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:02.505 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:02.505 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:02.505 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.505 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:02.505 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.505 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.505 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.505 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:02.505 { 00:22:02.505 "auth": { 00:22:02.505 "dhgroup": "ffdhe3072", 00:22:02.505 "digest": "sha256", 00:22:02.505 "state": "completed" 00:22:02.505 }, 00:22:02.505 "cntlid": 19, 00:22:02.505 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:22:02.505 "listen_address": { 00:22:02.505 "adrfam": "IPv4", 00:22:02.505 "traddr": "10.0.0.3", 00:22:02.505 "trsvcid": "4420", 00:22:02.505 "trtype": "TCP" 00:22:02.505 }, 00:22:02.505 "peer_address": { 00:22:02.505 "adrfam": "IPv4", 00:22:02.505 "traddr": "10.0.0.1", 00:22:02.505 "trsvcid": "46346", 00:22:02.506 "trtype": "TCP" 00:22:02.506 }, 00:22:02.506 "qid": 0, 00:22:02.506 "state": "enabled", 00:22:02.506 "thread": "nvmf_tgt_poll_group_000" 00:22:02.506 } 00:22:02.506 ]' 00:22:02.506 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:02.506 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:02.764 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:02.764 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:02.764 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:02.765 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:02.765 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:02.765 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:03.022 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmE3MzNkYWY0NmMyMWY0OTAzMGNhMzIzMWQxMGJmNzm5Zgcu: --dhchap-ctrl-secret DHHC-1:02:ZWVkMjdkYjYxYTVlODQ3MmQwOTg3NTQ2ZDdhYmI4M2JlY2FiN2U3ODUyZDEzMDRkpA1b6w==: 00:22:03.022 12:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:01:YmE3MzNkYWY0NmMyMWY0OTAzMGNhMzIzMWQxMGJmNzm5Zgcu: --dhchap-ctrl-secret DHHC-1:02:ZWVkMjdkYjYxYTVlODQ3MmQwOTg3NTQ2ZDdhYmI4M2JlY2FiN2U3ODUyZDEzMDRkpA1b6w==: 00:22:03.587 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:03.587 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:03.587 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:22:03.587 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.587 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.587 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.587 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:03.587 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:03.587 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:03.846 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:22:03.846 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:03.846 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:03.846 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:03.846 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:03.846 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:03.846 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:03.846 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.846 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.846 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.846 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:03.846 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:03.846 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:04.105 00:22:04.105 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:04.105 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:04.105 12:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:04.363 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.363 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:04.363 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.363 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.363 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.363 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:04.363 { 00:22:04.363 "auth": { 00:22:04.363 "dhgroup": "ffdhe3072", 00:22:04.363 "digest": "sha256", 00:22:04.363 "state": "completed" 00:22:04.363 }, 00:22:04.363 "cntlid": 21, 00:22:04.363 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:22:04.363 "listen_address": { 00:22:04.363 "adrfam": "IPv4", 00:22:04.363 "traddr": "10.0.0.3", 00:22:04.363 "trsvcid": "4420", 00:22:04.363 "trtype": "TCP" 00:22:04.363 }, 00:22:04.363 "peer_address": { 00:22:04.363 "adrfam": "IPv4", 00:22:04.363 "traddr": "10.0.0.1", 00:22:04.363 "trsvcid": "46368", 00:22:04.363 "trtype": "TCP" 00:22:04.363 }, 00:22:04.363 "qid": 0, 00:22:04.363 "state": "enabled", 00:22:04.363 "thread": "nvmf_tgt_poll_group_000" 00:22:04.363 } 00:22:04.363 ]' 00:22:04.363 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:04.363 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:04.363 12:21:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:04.363 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:04.363 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:04.363 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:04.363 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:04.363 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.621 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmYyZjQxZTFmY2VjMzVkMWYxOGFmMzgxYjhjN2I4NGE1NjZlNzkwNWE3NWE2OWI5FkTRXA==: --dhchap-ctrl-secret DHHC-1:01:ZmIwMGJiMjI4OTIxZDM3YzQ2NWVjMjY1NTlkNDQ2ZDYpEJql: 00:22:04.621 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:02:NmYyZjQxZTFmY2VjMzVkMWYxOGFmMzgxYjhjN2I4NGE1NjZlNzkwNWE3NWE2OWI5FkTRXA==: --dhchap-ctrl-secret DHHC-1:01:ZmIwMGJiMjI4OTIxZDM3YzQ2NWVjMjY1NTlkNDQ2ZDYpEJql: 00:22:05.188 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:05.188 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:05.188 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:22:05.188 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.188 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.188 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.188 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:05.188 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:05.188 12:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:22:05.488 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:22:05.488 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:05.488 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:05.488 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:05.488 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:05.488 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:05.488 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key3 00:22:05.488 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.489 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.489 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.489 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:05.489 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:05.489 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:05.489 00:22:05.762 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:05.762 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:05.762 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:05.762 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.762 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:05.762 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.762 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.762 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.762 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:05.762 { 00:22:05.762 "auth": { 00:22:05.762 "dhgroup": "ffdhe3072", 00:22:05.762 "digest": "sha256", 00:22:05.762 "state": "completed" 00:22:05.762 }, 00:22:05.762 "cntlid": 23, 00:22:05.762 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:22:05.762 "listen_address": { 00:22:05.762 "adrfam": "IPv4", 00:22:05.763 "traddr": "10.0.0.3", 00:22:05.763 "trsvcid": "4420", 00:22:05.763 "trtype": "TCP" 00:22:05.763 }, 00:22:05.763 "peer_address": { 00:22:05.763 "adrfam": "IPv4", 00:22:05.763 "traddr": "10.0.0.1", 00:22:05.763 "trsvcid": "46404", 00:22:05.763 "trtype": "TCP" 00:22:05.763 }, 00:22:05.763 "qid": 0, 00:22:05.763 "state": "enabled", 00:22:05.763 "thread": "nvmf_tgt_poll_group_000" 00:22:05.763 } 00:22:05.763 ]' 00:22:05.763 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:05.763 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:22:05.763 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:06.020 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:06.020 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:06.020 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:06.020 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:06.020 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:06.020 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2UyNjUyY2YwZTcxNzU5NDA2MjBjYjM4OGYxNjE0MmMzNjJmN2FkY2I5MGE0Mzc2NThkZTZkZjUxMjA1OTFiZRcJPQM=: 00:22:06.020 12:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:03:Y2UyNjUyY2YwZTcxNzU5NDA2MjBjYjM4OGYxNjE0MmMzNjJmN2FkY2I5MGE0Mzc2NThkZTZkZjUxMjA1OTFiZRcJPQM=: 00:22:06.586 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:06.586 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:06.586 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:22:06.586 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.586 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.586 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.586 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:06.586 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:06.586 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:06.586 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:06.844 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:22:06.844 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:06.844 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:06.844 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:06.844 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:06.844 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:06.844 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:06.844 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.844 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.844 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.844 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:06.844 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:06.844 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.102 00:22:07.102 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:07.102 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:07.102 12:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:07.359 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.359 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:07.359 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.359 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.359 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.359 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:07.359 { 00:22:07.359 "auth": { 00:22:07.359 "dhgroup": "ffdhe4096", 00:22:07.359 "digest": "sha256", 00:22:07.359 "state": "completed" 00:22:07.359 }, 00:22:07.359 "cntlid": 25, 00:22:07.359 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:22:07.359 "listen_address": { 00:22:07.359 "adrfam": "IPv4", 00:22:07.359 "traddr": "10.0.0.3", 00:22:07.359 "trsvcid": "4420", 00:22:07.359 "trtype": "TCP" 00:22:07.359 }, 00:22:07.359 "peer_address": { 00:22:07.359 "adrfam": "IPv4", 00:22:07.359 "traddr": "10.0.0.1", 00:22:07.359 "trsvcid": "46426", 00:22:07.359 "trtype": "TCP" 00:22:07.359 }, 00:22:07.359 "qid": 0, 00:22:07.359 "state": "enabled", 00:22:07.359 "thread": "nvmf_tgt_poll_group_000" 00:22:07.359 } 00:22:07.359 ]' 00:22:07.359 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:22:07.360 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:07.360 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:07.360 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:07.360 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:07.617 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:07.617 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:07.617 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:07.617 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWI1NWY5YjdhYzZkMzBmZWJlZDRiMjEwNTk0N2MwYmJiYzVkYjIwMzdkNjRjMDgzTJxClg==: --dhchap-ctrl-secret DHHC-1:03:MGE5YzFjYzExNDY2MDk5ODZhMzJkZDBiOWE5ZTU5OWU5NGQwYWU4ZmIwY2I1MGI2ZDEyNzE0ZDBkZDFkZDYwNLfIQ3U=: 00:22:07.617 12:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:00:YWI1NWY5YjdhYzZkMzBmZWJlZDRiMjEwNTk0N2MwYmJiYzVkYjIwMzdkNjRjMDgzTJxClg==: --dhchap-ctrl-secret DHHC-1:03:MGE5YzFjYzExNDY2MDk5ODZhMzJkZDBiOWE5ZTU5OWU5NGQwYWU4ZmIwY2I1MGI2ZDEyNzE0ZDBkZDFkZDYwNLfIQ3U=: 00:22:08.183 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:08.183 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:08.183 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:22:08.183 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.183 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.183 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.183 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:08.183 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:08.183 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:08.441 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:22:08.441 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:08.441 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:08.441 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:08.441 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:08.441 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:08.441 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:08.441 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.441 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.441 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.441 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:08.441 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:08.441 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:08.699 00:22:08.957 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:08.957 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:08.957 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.957 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.957 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:08.957 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.957 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.957 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.957 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:08.957 { 00:22:08.957 "auth": { 00:22:08.957 "dhgroup": "ffdhe4096", 00:22:08.957 "digest": "sha256", 00:22:08.957 "state": "completed" 00:22:08.957 }, 00:22:08.957 "cntlid": 27, 00:22:08.957 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:22:08.957 "listen_address": { 00:22:08.957 "adrfam": "IPv4", 00:22:08.957 "traddr": "10.0.0.3", 00:22:08.957 "trsvcid": "4420", 00:22:08.957 "trtype": "TCP" 00:22:08.957 }, 00:22:08.957 "peer_address": { 00:22:08.957 "adrfam": "IPv4", 00:22:08.957 "traddr": "10.0.0.1", 00:22:08.957 "trsvcid": "46460", 00:22:08.957 "trtype": "TCP" 00:22:08.957 }, 00:22:08.957 "qid": 0, 
00:22:08.957 "state": "enabled", 00:22:08.957 "thread": "nvmf_tgt_poll_group_000" 00:22:08.957 } 00:22:08.957 ]' 00:22:08.957 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:08.957 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:08.957 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:09.216 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:09.216 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:09.216 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:09.216 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:09.216 12:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:09.477 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmE3MzNkYWY0NmMyMWY0OTAzMGNhMzIzMWQxMGJmNzm5Zgcu: --dhchap-ctrl-secret DHHC-1:02:ZWVkMjdkYjYxYTVlODQ3MmQwOTg3NTQ2ZDdhYmI4M2JlY2FiN2U3ODUyZDEzMDRkpA1b6w==: 00:22:09.477 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:01:YmE3MzNkYWY0NmMyMWY0OTAzMGNhMzIzMWQxMGJmNzm5Zgcu: --dhchap-ctrl-secret DHHC-1:02:ZWVkMjdkYjYxYTVlODQ3MmQwOTg3NTQ2ZDdhYmI4M2JlY2FiN2U3ODUyZDEzMDRkpA1b6w==: 00:22:10.043 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:10.043 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:10.043 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:22:10.043 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.043 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.043 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.043 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:10.043 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:10.043 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:10.043 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:22:10.043 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:10.043 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@67 -- # digest=sha256 00:22:10.043 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:10.043 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:10.043 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:10.043 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:10.043 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.043 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.043 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.043 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:10.043 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:10.044 12:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:10.610 00:22:10.610 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:10.610 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:10.610 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:10.610 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.610 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:10.610 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.610 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.610 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.610 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:10.610 { 00:22:10.610 "auth": { 00:22:10.610 "dhgroup": "ffdhe4096", 00:22:10.610 "digest": "sha256", 00:22:10.610 "state": "completed" 00:22:10.610 }, 00:22:10.610 "cntlid": 29, 00:22:10.610 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:22:10.610 "listen_address": { 00:22:10.610 "adrfam": "IPv4", 00:22:10.610 "traddr": "10.0.0.3", 00:22:10.610 "trsvcid": "4420", 00:22:10.610 "trtype": "TCP" 00:22:10.610 }, 00:22:10.610 "peer_address": { 00:22:10.610 "adrfam": "IPv4", 00:22:10.610 "traddr": "10.0.0.1", 
00:22:10.610 "trsvcid": "49632", 00:22:10.610 "trtype": "TCP" 00:22:10.610 }, 00:22:10.610 "qid": 0, 00:22:10.610 "state": "enabled", 00:22:10.610 "thread": "nvmf_tgt_poll_group_000" 00:22:10.610 } 00:22:10.610 ]' 00:22:10.610 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:10.868 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:10.868 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:10.868 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:10.868 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:10.868 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:10.868 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:10.868 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:11.126 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmYyZjQxZTFmY2VjMzVkMWYxOGFmMzgxYjhjN2I4NGE1NjZlNzkwNWE3NWE2OWI5FkTRXA==: --dhchap-ctrl-secret DHHC-1:01:ZmIwMGJiMjI4OTIxZDM3YzQ2NWVjMjY1NTlkNDQ2ZDYpEJql: 00:22:11.126 12:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:02:NmYyZjQxZTFmY2VjMzVkMWYxOGFmMzgxYjhjN2I4NGE1NjZlNzkwNWE3NWE2OWI5FkTRXA==: --dhchap-ctrl-secret DHHC-1:01:ZmIwMGJiMjI4OTIxZDM3YzQ2NWVjMjY1NTlkNDQ2ZDYpEJql: 00:22:11.700 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:11.700 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:11.700 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:22:11.700 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.700 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.700 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.700 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:11.700 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:11.700 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:22:11.973 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:22:11.973 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # 
local digest dhgroup key ckey qpairs 00:22:11.973 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:11.973 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:11.973 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:11.973 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:11.973 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key3 00:22:11.973 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.973 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.973 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.973 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:11.973 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:11.973 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:12.230 00:22:12.230 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:12.231 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:12.231 12:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:12.488 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.488 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:12.488 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.488 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.488 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.488 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:12.488 { 00:22:12.488 "auth": { 00:22:12.488 "dhgroup": "ffdhe4096", 00:22:12.488 "digest": "sha256", 00:22:12.488 "state": "completed" 00:22:12.488 }, 00:22:12.488 "cntlid": 31, 00:22:12.488 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:22:12.488 "listen_address": { 00:22:12.488 "adrfam": "IPv4", 00:22:12.488 "traddr": "10.0.0.3", 00:22:12.488 "trsvcid": "4420", 00:22:12.488 "trtype": "TCP" 00:22:12.488 }, 00:22:12.488 "peer_address": { 00:22:12.488 "adrfam": "IPv4", 00:22:12.488 "traddr": 
"10.0.0.1", 00:22:12.488 "trsvcid": "49664", 00:22:12.488 "trtype": "TCP" 00:22:12.488 }, 00:22:12.488 "qid": 0, 00:22:12.488 "state": "enabled", 00:22:12.488 "thread": "nvmf_tgt_poll_group_000" 00:22:12.488 } 00:22:12.488 ]' 00:22:12.488 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:12.488 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:12.488 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:12.488 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:12.488 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:12.488 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:12.488 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:12.488 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:12.746 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2UyNjUyY2YwZTcxNzU5NDA2MjBjYjM4OGYxNjE0MmMzNjJmN2FkY2I5MGE0Mzc2NThkZTZkZjUxMjA1OTFiZRcJPQM=: 00:22:12.746 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:03:Y2UyNjUyY2YwZTcxNzU5NDA2MjBjYjM4OGYxNjE0MmMzNjJmN2FkY2I5MGE0Mzc2NThkZTZkZjUxMjA1OTFiZRcJPQM=: 00:22:13.312 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:13.312 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:13.312 12:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:22:13.312 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.312 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.312 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.312 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:13.312 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:13.312 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:13.312 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:13.568 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:22:13.568 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:13.568 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:13.568 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:13.568 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:13.568 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:13.568 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:13.568 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.568 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.568 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.568 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:13.568 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:13.568 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:13.825 00:22:13.825 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:13.825 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:13.825 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:14.084 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:14.084 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:14.084 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.084 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.084 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.084 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:14.084 { 00:22:14.084 "auth": { 00:22:14.084 "dhgroup": "ffdhe6144", 00:22:14.084 "digest": "sha256", 00:22:14.084 "state": "completed" 00:22:14.084 }, 00:22:14.084 "cntlid": 33, 00:22:14.084 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:22:14.084 "listen_address": { 00:22:14.084 "adrfam": "IPv4", 00:22:14.084 "traddr": "10.0.0.3", 00:22:14.084 "trsvcid": "4420", 00:22:14.084 
"trtype": "TCP" 00:22:14.084 }, 00:22:14.084 "peer_address": { 00:22:14.084 "adrfam": "IPv4", 00:22:14.084 "traddr": "10.0.0.1", 00:22:14.084 "trsvcid": "49696", 00:22:14.084 "trtype": "TCP" 00:22:14.084 }, 00:22:14.084 "qid": 0, 00:22:14.084 "state": "enabled", 00:22:14.084 "thread": "nvmf_tgt_poll_group_000" 00:22:14.084 } 00:22:14.084 ]' 00:22:14.084 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:14.084 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:14.084 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:14.084 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:14.084 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:14.084 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:14.084 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:14.084 12:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:14.342 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWI1NWY5YjdhYzZkMzBmZWJlZDRiMjEwNTk0N2MwYmJiYzVkYjIwMzdkNjRjMDgzTJxClg==: --dhchap-ctrl-secret DHHC-1:03:MGE5YzFjYzExNDY2MDk5ODZhMzJkZDBiOWE5ZTU5OWU5NGQwYWU4ZmIwY2I1MGI2ZDEyNzE0ZDBkZDFkZDYwNLfIQ3U=: 00:22:14.343 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:00:YWI1NWY5YjdhYzZkMzBmZWJlZDRiMjEwNTk0N2MwYmJiYzVkYjIwMzdkNjRjMDgzTJxClg==: --dhchap-ctrl-secret DHHC-1:03:MGE5YzFjYzExNDY2MDk5ODZhMzJkZDBiOWE5ZTU5OWU5NGQwYWU4ZmIwY2I1MGI2ZDEyNzE0ZDBkZDFkZDYwNLfIQ3U=: 00:22:14.908 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:14.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:14.908 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:22:14.908 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.908 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.908 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.909 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:14.909 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:14.909 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 
00:22:15.166 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:22:15.166 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:15.166 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:15.166 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:15.166 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:15.166 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:15.166 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:15.166 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.166 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.166 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.166 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:15.166 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:15.166 12:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:15.732 00:22:15.732 12:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:15.732 12:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:15.732 12:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:15.732 12:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.732 12:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:15.732 12:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.732 12:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.732 12:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.732 12:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:15.732 { 00:22:15.732 "auth": { 00:22:15.732 "dhgroup": "ffdhe6144", 00:22:15.732 "digest": "sha256", 00:22:15.732 "state": "completed" 00:22:15.732 }, 00:22:15.732 "cntlid": 35, 00:22:15.732 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:22:15.732 "listen_address": { 00:22:15.732 "adrfam": "IPv4", 00:22:15.732 "traddr": "10.0.0.3", 00:22:15.732 "trsvcid": "4420", 00:22:15.732 "trtype": "TCP" 00:22:15.732 }, 00:22:15.732 "peer_address": { 00:22:15.732 "adrfam": "IPv4", 00:22:15.732 "traddr": "10.0.0.1", 00:22:15.732 "trsvcid": "49720", 00:22:15.732 "trtype": "TCP" 00:22:15.732 }, 00:22:15.732 "qid": 0, 00:22:15.732 "state": "enabled", 00:22:15.732 "thread": "nvmf_tgt_poll_group_000" 00:22:15.732 } 00:22:15.732 ]' 00:22:15.990 12:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:15.990 12:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:15.990 12:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:15.990 12:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:15.990 12:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:15.990 12:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:15.990 12:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:15.990 12:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:16.248 12:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmE3MzNkYWY0NmMyMWY0OTAzMGNhMzIzMWQxMGJmNzm5Zgcu: --dhchap-ctrl-secret DHHC-1:02:ZWVkMjdkYjYxYTVlODQ3MmQwOTg3NTQ2ZDdhYmI4M2JlY2FiN2U3ODUyZDEzMDRkpA1b6w==: 00:22:16.248 12:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:01:YmE3MzNkYWY0NmMyMWY0OTAzMGNhMzIzMWQxMGJmNzm5Zgcu: --dhchap-ctrl-secret DHHC-1:02:ZWVkMjdkYjYxYTVlODQ3MmQwOTg3NTQ2ZDdhYmI4M2JlY2FiN2U3ODUyZDEzMDRkpA1b6w==: 00:22:16.815 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:16.815 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:16.815 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:22:16.815 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.815 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.815 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.815 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:16.815 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:16.815 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:17.073 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:22:17.073 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:17.073 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:17.073 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:17.073 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:17.073 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:17.073 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:17.073 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.073 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.073 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.073 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:17.073 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:17.073 12:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:17.332 00:22:17.589 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:17.589 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:17.589 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:17.589 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.589 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:17.589 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.589 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.589 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.589 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:17.589 { 00:22:17.589 "auth": { 00:22:17.589 "dhgroup": "ffdhe6144", 
00:22:17.589 "digest": "sha256", 00:22:17.589 "state": "completed" 00:22:17.589 }, 00:22:17.589 "cntlid": 37, 00:22:17.589 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:22:17.589 "listen_address": { 00:22:17.589 "adrfam": "IPv4", 00:22:17.589 "traddr": "10.0.0.3", 00:22:17.589 "trsvcid": "4420", 00:22:17.589 "trtype": "TCP" 00:22:17.589 }, 00:22:17.589 "peer_address": { 00:22:17.589 "adrfam": "IPv4", 00:22:17.589 "traddr": "10.0.0.1", 00:22:17.589 "trsvcid": "49750", 00:22:17.589 "trtype": "TCP" 00:22:17.589 }, 00:22:17.589 "qid": 0, 00:22:17.589 "state": "enabled", 00:22:17.589 "thread": "nvmf_tgt_poll_group_000" 00:22:17.589 } 00:22:17.589 ]' 00:22:17.589 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:17.847 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:17.847 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:17.847 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:17.847 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:17.847 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:17.847 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:17.847 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:18.105 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmYyZjQxZTFmY2VjMzVkMWYxOGFmMzgxYjhjN2I4NGE1NjZlNzkwNWE3NWE2OWI5FkTRXA==: --dhchap-ctrl-secret DHHC-1:01:ZmIwMGJiMjI4OTIxZDM3YzQ2NWVjMjY1NTlkNDQ2ZDYpEJql: 00:22:18.105 12:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:02:NmYyZjQxZTFmY2VjMzVkMWYxOGFmMzgxYjhjN2I4NGE1NjZlNzkwNWE3NWE2OWI5FkTRXA==: --dhchap-ctrl-secret DHHC-1:01:ZmIwMGJiMjI4OTIxZDM3YzQ2NWVjMjY1NTlkNDQ2ZDYpEJql: 00:22:18.675 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:18.675 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:18.675 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:22:18.675 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.675 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.675 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.675 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:18.675 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe6144 00:22:18.675 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:18.933 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:22:18.933 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:18.933 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:18.933 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:18.933 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:18.933 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:18.933 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key3 00:22:18.933 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.933 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.933 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.933 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:18.933 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:18.933 12:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:19.191 00:22:19.191 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:19.191 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:19.191 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:19.450 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.450 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:19.450 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.450 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.450 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.450 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:19.450 { 00:22:19.450 "auth": { 00:22:19.450 "dhgroup": 
"ffdhe6144", 00:22:19.450 "digest": "sha256", 00:22:19.450 "state": "completed" 00:22:19.450 }, 00:22:19.450 "cntlid": 39, 00:22:19.450 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:22:19.450 "listen_address": { 00:22:19.450 "adrfam": "IPv4", 00:22:19.450 "traddr": "10.0.0.3", 00:22:19.450 "trsvcid": "4420", 00:22:19.450 "trtype": "TCP" 00:22:19.450 }, 00:22:19.450 "peer_address": { 00:22:19.450 "adrfam": "IPv4", 00:22:19.450 "traddr": "10.0.0.1", 00:22:19.450 "trsvcid": "47702", 00:22:19.450 "trtype": "TCP" 00:22:19.450 }, 00:22:19.450 "qid": 0, 00:22:19.450 "state": "enabled", 00:22:19.450 "thread": "nvmf_tgt_poll_group_000" 00:22:19.450 } 00:22:19.450 ]' 00:22:19.450 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:19.450 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:19.450 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:19.450 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:19.450 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:19.708 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:19.708 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:19.708 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:19.708 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2UyNjUyY2YwZTcxNzU5NDA2MjBjYjM4OGYxNjE0MmMzNjJmN2FkY2I5MGE0Mzc2NThkZTZkZjUxMjA1OTFiZRcJPQM=: 00:22:19.708 12:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:03:Y2UyNjUyY2YwZTcxNzU5NDA2MjBjYjM4OGYxNjE0MmMzNjJmN2FkY2I5MGE0Mzc2NThkZTZkZjUxMjA1OTFiZRcJPQM=: 00:22:20.273 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:20.273 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:20.273 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:22:20.273 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.273 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.532 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.532 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:20.532 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:20.532 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:20.532 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:20.532 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:22:20.532 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:20.532 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:20.532 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:20.532 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:20.532 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:20.532 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:20.532 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.532 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.532 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.532 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:20.532 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:20.532 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:21.097 00:22:21.097 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:21.097 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:21.097 12:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:21.355 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.355 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:21.355 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.355 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.355 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.355 12:21:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:21.355 { 00:22:21.355 "auth": { 00:22:21.355 "dhgroup": "ffdhe8192", 00:22:21.355 "digest": "sha256", 00:22:21.355 "state": "completed" 00:22:21.355 }, 00:22:21.355 "cntlid": 41, 00:22:21.355 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:22:21.355 "listen_address": { 00:22:21.355 "adrfam": "IPv4", 00:22:21.355 "traddr": "10.0.0.3", 00:22:21.355 "trsvcid": "4420", 00:22:21.355 "trtype": "TCP" 00:22:21.355 }, 00:22:21.355 "peer_address": { 00:22:21.355 "adrfam": "IPv4", 00:22:21.355 "traddr": "10.0.0.1", 00:22:21.355 "trsvcid": "47734", 00:22:21.355 "trtype": "TCP" 00:22:21.355 }, 00:22:21.355 "qid": 0, 00:22:21.355 "state": "enabled", 00:22:21.355 "thread": "nvmf_tgt_poll_group_000" 00:22:21.355 } 00:22:21.355 ]' 00:22:21.355 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:21.355 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:21.355 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:21.613 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:21.613 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:21.613 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:21.613 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:21.613 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:21.613 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWI1NWY5YjdhYzZkMzBmZWJlZDRiMjEwNTk0N2MwYmJiYzVkYjIwMzdkNjRjMDgzTJxClg==: --dhchap-ctrl-secret DHHC-1:03:MGE5YzFjYzExNDY2MDk5ODZhMzJkZDBiOWE5ZTU5OWU5NGQwYWU4ZmIwY2I1MGI2ZDEyNzE0ZDBkZDFkZDYwNLfIQ3U=: 00:22:21.613 12:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:00:YWI1NWY5YjdhYzZkMzBmZWJlZDRiMjEwNTk0N2MwYmJiYzVkYjIwMzdkNjRjMDgzTJxClg==: --dhchap-ctrl-secret DHHC-1:03:MGE5YzFjYzExNDY2MDk5ODZhMzJkZDBiOWE5ZTU5OWU5NGQwYWU4ZmIwY2I1MGI2ZDEyNzE0ZDBkZDFkZDYwNLfIQ3U=: 00:22:22.545 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:22.545 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:22.545 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:22:22.545 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.545 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.545 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.545 12:21:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:22.545 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:22.545 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:22.545 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:22:22.545 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:22.545 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:22.545 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:22.545 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:22.545 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:22.545 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:22.545 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.545 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.545 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.545 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:22.545 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:22.545 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:23.109 00:22:23.109 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:23.109 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:23.109 12:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:23.366 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.366 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:23.366 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.366 12:21:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.366 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.366 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:23.366 { 00:22:23.366 "auth": { 00:22:23.366 "dhgroup": "ffdhe8192", 00:22:23.366 "digest": "sha256", 00:22:23.366 "state": "completed" 00:22:23.366 }, 00:22:23.366 "cntlid": 43, 00:22:23.366 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:22:23.366 "listen_address": { 00:22:23.366 "adrfam": "IPv4", 00:22:23.366 "traddr": "10.0.0.3", 00:22:23.366 "trsvcid": "4420", 00:22:23.366 "trtype": "TCP" 00:22:23.366 }, 00:22:23.366 "peer_address": { 00:22:23.366 "adrfam": "IPv4", 00:22:23.366 "traddr": "10.0.0.1", 00:22:23.366 "trsvcid": "47756", 00:22:23.366 "trtype": "TCP" 00:22:23.366 }, 00:22:23.366 "qid": 0, 00:22:23.366 "state": "enabled", 00:22:23.366 "thread": "nvmf_tgt_poll_group_000" 00:22:23.366 } 00:22:23.366 ]' 00:22:23.366 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:23.366 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:23.366 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:23.366 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:23.366 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:23.366 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:23.366 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:23.366 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:23.623 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmE3MzNkYWY0NmMyMWY0OTAzMGNhMzIzMWQxMGJmNzm5Zgcu: --dhchap-ctrl-secret DHHC-1:02:ZWVkMjdkYjYxYTVlODQ3MmQwOTg3NTQ2ZDdhYmI4M2JlY2FiN2U3ODUyZDEzMDRkpA1b6w==: 00:22:23.623 12:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:01:YmE3MzNkYWY0NmMyMWY0OTAzMGNhMzIzMWQxMGJmNzm5Zgcu: --dhchap-ctrl-secret DHHC-1:02:ZWVkMjdkYjYxYTVlODQ3MmQwOTg3NTQ2ZDdhYmI4M2JlY2FiN2U3ODUyZDEzMDRkpA1b6w==: 00:22:24.555 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:24.555 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:24.555 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:22:24.555 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.555 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
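After each attach the script verifies what was actually negotiated rather than trusting the connect: nvmf_subsystem_get_qpairs reports an auth object per qpair, and the jq probes seen above assert the digest, DH group, and final state. Continuing the sketch, for the sha256/ffdhe8192 combination being checked here:

# Verify the negotiated auth parameters on the target's view of the qpair.
qpairs=$($rpc nvmf_subsystem_get_qpairs "$subnqn")

[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
# "completed" means the DH-HMAC-CHAP exchange finished successfully.
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]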
00:22:24.555 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.555 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:24.555 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:24.555 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:24.555 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:22:24.555 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:24.555 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:24.555 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:24.555 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:24.555 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:24.555 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:24.555 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.555 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.555 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.555 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:24.555 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:24.555 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:25.120 00:22:25.120 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:25.120 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:25.120 12:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:25.379 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:25.379 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:25.379 12:21:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.379 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.379 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.379 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:25.379 { 00:22:25.379 "auth": { 00:22:25.379 "dhgroup": "ffdhe8192", 00:22:25.379 "digest": "sha256", 00:22:25.379 "state": "completed" 00:22:25.379 }, 00:22:25.379 "cntlid": 45, 00:22:25.379 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:22:25.379 "listen_address": { 00:22:25.379 "adrfam": "IPv4", 00:22:25.379 "traddr": "10.0.0.3", 00:22:25.379 "trsvcid": "4420", 00:22:25.379 "trtype": "TCP" 00:22:25.379 }, 00:22:25.379 "peer_address": { 00:22:25.379 "adrfam": "IPv4", 00:22:25.379 "traddr": "10.0.0.1", 00:22:25.379 "trsvcid": "47772", 00:22:25.379 "trtype": "TCP" 00:22:25.379 }, 00:22:25.379 "qid": 0, 00:22:25.379 "state": "enabled", 00:22:25.379 "thread": "nvmf_tgt_poll_group_000" 00:22:25.379 } 00:22:25.379 ]' 00:22:25.379 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:25.379 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:25.379 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:25.379 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:25.379 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:25.637 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:25.637 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:25.637 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:25.637 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmYyZjQxZTFmY2VjMzVkMWYxOGFmMzgxYjhjN2I4NGE1NjZlNzkwNWE3NWE2OWI5FkTRXA==: --dhchap-ctrl-secret DHHC-1:01:ZmIwMGJiMjI4OTIxZDM3YzQ2NWVjMjY1NTlkNDQ2ZDYpEJql: 00:22:25.637 12:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:02:NmYyZjQxZTFmY2VjMzVkMWYxOGFmMzgxYjhjN2I4NGE1NjZlNzkwNWE3NWE2OWI5FkTRXA==: --dhchap-ctrl-secret DHHC-1:01:ZmIwMGJiMjI4OTIxZDM3YzQ2NWVjMjY1NTlkNDQ2ZDYpEJql: 00:22:26.571 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:26.571 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:26.571 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:22:26.571 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
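Each pass also pushes the same key material through the kernel initiator. Unlike the SPDK host, nvme-cli takes the secrets inline as DHHC-1 strings rather than keyring names; teardown is a plain disconnect plus nvmf_subsystem_remove_host, clearing the way for the next combination. The key2 connect just traced, reassembled as a standalone sketch (same variables as above):

# Kernel host: inline DHHC-1 secrets instead of SPDK keyring names.
nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 \
    --dhchap-secret 'DHHC-1:02:NmYyZjQxZTFmY2VjMzVkMWYxOGFmMzgxYjhjN2I4NGE1NjZlNzkwNWE3NWE2OWI5FkTRXA==:' \
    --dhchap-ctrl-secret 'DHHC-1:01:ZmIwMGJiMjI4OTIxZDM3YzQ2NWVjMjY1NTlkNDQ2ZDYpEJql:'

# Teardown before the next digest/DH-group/key combination.
nvme disconnect -n "$subnqn"
$rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"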
00:22:26.571 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.571 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.571 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:26.571 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:26.571 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:26.571 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:22:26.571 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:26.571 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:26.571 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:26.571 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:26.571 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:26.571 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key3 00:22:26.571 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.571 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.571 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.571 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:26.571 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:26.571 12:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:27.137 00:22:27.137 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:27.137 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:27.137 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:27.395 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.395 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:27.395 
12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.395 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.395 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.395 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:27.395 { 00:22:27.395 "auth": { 00:22:27.395 "dhgroup": "ffdhe8192", 00:22:27.395 "digest": "sha256", 00:22:27.395 "state": "completed" 00:22:27.395 }, 00:22:27.395 "cntlid": 47, 00:22:27.395 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:22:27.395 "listen_address": { 00:22:27.395 "adrfam": "IPv4", 00:22:27.395 "traddr": "10.0.0.3", 00:22:27.395 "trsvcid": "4420", 00:22:27.395 "trtype": "TCP" 00:22:27.395 }, 00:22:27.395 "peer_address": { 00:22:27.395 "adrfam": "IPv4", 00:22:27.395 "traddr": "10.0.0.1", 00:22:27.395 "trsvcid": "47806", 00:22:27.395 "trtype": "TCP" 00:22:27.395 }, 00:22:27.395 "qid": 0, 00:22:27.395 "state": "enabled", 00:22:27.395 "thread": "nvmf_tgt_poll_group_000" 00:22:27.395 } 00:22:27.395 ]' 00:22:27.395 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:27.395 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:27.395 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:27.652 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:27.652 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:27.652 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:27.652 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:27.652 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:27.910 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2UyNjUyY2YwZTcxNzU5NDA2MjBjYjM4OGYxNjE0MmMzNjJmN2FkY2I5MGE0Mzc2NThkZTZkZjUxMjA1OTFiZRcJPQM=: 00:22:27.910 12:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:03:Y2UyNjUyY2YwZTcxNzU5NDA2MjBjYjM4OGYxNjE0MmMzNjJmN2FkY2I5MGE0Mzc2NThkZTZkZjUxMjA1OTFiZRcJPQM=: 00:22:28.577 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:28.577 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:28.577 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:22:28.577 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.577 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
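This closes the sha256/ffdhe8192 sweep; the loop frames in the next records (target/auth.sh@118-121) advance to sha384 with the null DH group. Reconstructed from those frames, the driver has roughly this shape; the exact array contents are an inference from the combinations visible in this log, and keys, hostrpc, and connect_authenticate are the script's own helpers:

# Shape of the sweep in target/auth.sh, reconstructed from the xtrace.
digests=(sha256 sha384 sha512)                                     # sha512 inferred
dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)  # middle groups inferred

for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            hostrpc bdev_nvme_set_options \
                --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done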
00:22:28.577 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.577 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:22:28.577 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:28.577 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:28.577 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:28.577 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:28.577 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:22:28.577 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:28.577 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:28.577 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:28.577 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:28.577 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:28.577 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:28.577 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.577 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.577 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.577 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:28.577 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:28.577 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:28.834 00:22:28.834 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:28.834 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:28.834 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:29.093 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.093 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:29.093 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.093 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.093 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.093 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:29.093 { 00:22:29.093 "auth": { 00:22:29.093 "dhgroup": "null", 00:22:29.093 "digest": "sha384", 00:22:29.093 "state": "completed" 00:22:29.093 }, 00:22:29.093 "cntlid": 49, 00:22:29.093 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:22:29.093 "listen_address": { 00:22:29.093 "adrfam": "IPv4", 00:22:29.093 "traddr": "10.0.0.3", 00:22:29.093 "trsvcid": "4420", 00:22:29.093 "trtype": "TCP" 00:22:29.093 }, 00:22:29.093 "peer_address": { 00:22:29.093 "adrfam": "IPv4", 00:22:29.093 "traddr": "10.0.0.1", 00:22:29.093 "trsvcid": "45196", 00:22:29.093 "trtype": "TCP" 00:22:29.093 }, 00:22:29.093 "qid": 0, 00:22:29.093 "state": "enabled", 00:22:29.093 "thread": "nvmf_tgt_poll_group_000" 00:22:29.093 } 00:22:29.093 ]' 00:22:29.093 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:29.093 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:29.093 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:29.350 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:29.350 12:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:29.350 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:29.350 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:29.351 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:29.608 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWI1NWY5YjdhYzZkMzBmZWJlZDRiMjEwNTk0N2MwYmJiYzVkYjIwMzdkNjRjMDgzTJxClg==: --dhchap-ctrl-secret DHHC-1:03:MGE5YzFjYzExNDY2MDk5ODZhMzJkZDBiOWE5ZTU5OWU5NGQwYWU4ZmIwY2I1MGI2ZDEyNzE0ZDBkZDFkZDYwNLfIQ3U=: 00:22:29.608 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:00:YWI1NWY5YjdhYzZkMzBmZWJlZDRiMjEwNTk0N2MwYmJiYzVkYjIwMzdkNjRjMDgzTJxClg==: --dhchap-ctrl-secret DHHC-1:03:MGE5YzFjYzExNDY2MDk5ODZhMzJkZDBiOWE5ZTU5OWU5NGQwYWU4ZmIwY2I1MGI2ZDEyNzE0ZDBkZDFkZDYwNLfIQ3U=: 00:22:30.174 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:30.174 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:30.174 12:21:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:22:30.174 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.174 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.174 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.174 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:30.174 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:30.174 12:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:30.432 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:22:30.432 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:30.432 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:30.432 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:30.432 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:30.432 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:30.432 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:30.432 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.432 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.432 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.432 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:30.432 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:30.432 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:30.690 00:22:30.690 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:30.690 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
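The trace above is one pass of the nvmf_auth_target DH-HMAC-CHAP matrix in target/auth.sh: for every digest, dhgroup, and key id, the script pins the host to a single digest/dhgroup pair, registers the host NQN on the subsystem with the key pair under test, attaches a bdev controller (which forces the authentication handshake during CONNECT), asserts the negotiated digest, dhgroup, and state on the target's qpair listing, then tears everything down and repeats the handshake once more through the kernel initiator via nvme connect with the DHHC-1 secrets shown in the log. Below is a minimal sketch of a single iteration, reconstructed only from commands visible in this trace; the target-side RPC socket is an assumption (the excerpt shows only the host socket /var/tmp/host.sock), and the nvme connect/disconnect step is omitted here.

#!/usr/bin/env bash
# Sketch of one digest/dhgroup/keyid iteration, as replayed by this trace.
# Paths, addresses, and NQNs are copied from the log; tgtsock is assumed.
set -euo pipefail

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock
tgtsock=/var/tmp/spdk.sock          # assumed default; not visible in this excerpt
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d

# 1. Restrict the host to exactly one digest/dhgroup combination.
"$rpc" -s "$hostsock" bdev_nvme_set_options \
    --dhchap-digests sha384 --dhchap-dhgroups null

# 2. Allow the host on the subsystem with the key pair under test.
"$rpc" -s "$tgtsock" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# 3. Attach a controller; DH-HMAC-CHAP runs as part of the connect.
"$rpc" -s "$hostsock" bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.3 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# 4. Verify the controller exists and the qpair negotiated what we asked for
#    (these are the jq checks the trace performs after every attach).
[[ "$("$rpc" -s "$hostsock" bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
qpairs=$("$rpc" -s "$tgtsock" nvmf_subsystem_get_qpairs "$subnqn")
[[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == sha384 ]]
[[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == null ]]
[[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == completed ]]

# 5. Tear down and deregister the host before the next key id.
"$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0
"$rpc" -s "$tgtsock" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

In the remainder of the trace the same sequence runs for key ids 0 through 3 (key3 has no controller key, so the ckey argument is dropped there) and then again for each dhgroup under sha384: null, ffdhe2048, ffdhe3072, and so on.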
00:22:30.690 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:30.947 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.947 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:30.947 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.947 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.947 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.947 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:30.948 { 00:22:30.948 "auth": { 00:22:30.948 "dhgroup": "null", 00:22:30.948 "digest": "sha384", 00:22:30.948 "state": "completed" 00:22:30.948 }, 00:22:30.948 "cntlid": 51, 00:22:30.948 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:22:30.948 "listen_address": { 00:22:30.948 "adrfam": "IPv4", 00:22:30.948 "traddr": "10.0.0.3", 00:22:30.948 "trsvcid": "4420", 00:22:30.948 "trtype": "TCP" 00:22:30.948 }, 00:22:30.948 "peer_address": { 00:22:30.948 "adrfam": "IPv4", 00:22:30.948 "traddr": "10.0.0.1", 00:22:30.948 "trsvcid": "45224", 00:22:30.948 "trtype": "TCP" 00:22:30.948 }, 00:22:30.948 "qid": 0, 00:22:30.948 "state": "enabled", 00:22:30.948 "thread": "nvmf_tgt_poll_group_000" 00:22:30.948 } 00:22:30.948 ]' 00:22:30.948 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:30.948 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:30.948 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:30.948 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:30.948 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:30.948 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:30.948 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:30.948 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:31.206 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmE3MzNkYWY0NmMyMWY0OTAzMGNhMzIzMWQxMGJmNzm5Zgcu: --dhchap-ctrl-secret DHHC-1:02:ZWVkMjdkYjYxYTVlODQ3MmQwOTg3NTQ2ZDdhYmI4M2JlY2FiN2U3ODUyZDEzMDRkpA1b6w==: 00:22:31.206 12:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:01:YmE3MzNkYWY0NmMyMWY0OTAzMGNhMzIzMWQxMGJmNzm5Zgcu: --dhchap-ctrl-secret DHHC-1:02:ZWVkMjdkYjYxYTVlODQ3MmQwOTg3NTQ2ZDdhYmI4M2JlY2FiN2U3ODUyZDEzMDRkpA1b6w==: 00:22:31.771 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:31.771 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:31.771 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:22:31.771 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.771 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.771 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.771 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:31.771 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:31.771 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:32.029 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:22:32.029 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:32.029 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:32.029 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:32.029 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:32.029 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:32.029 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:32.029 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.029 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.029 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.029 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:32.029 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:32.029 12:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:32.286 00:22:32.286 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:32.286 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:22:32.286 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:32.543 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.543 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:32.543 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.543 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.543 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.543 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:32.543 { 00:22:32.543 "auth": { 00:22:32.543 "dhgroup": "null", 00:22:32.543 "digest": "sha384", 00:22:32.543 "state": "completed" 00:22:32.543 }, 00:22:32.543 "cntlid": 53, 00:22:32.543 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:22:32.543 "listen_address": { 00:22:32.543 "adrfam": "IPv4", 00:22:32.543 "traddr": "10.0.0.3", 00:22:32.543 "trsvcid": "4420", 00:22:32.543 "trtype": "TCP" 00:22:32.543 }, 00:22:32.543 "peer_address": { 00:22:32.543 "adrfam": "IPv4", 00:22:32.543 "traddr": "10.0.0.1", 00:22:32.543 "trsvcid": "45262", 00:22:32.543 "trtype": "TCP" 00:22:32.543 }, 00:22:32.543 "qid": 0, 00:22:32.543 "state": "enabled", 00:22:32.543 "thread": "nvmf_tgt_poll_group_000" 00:22:32.543 } 00:22:32.543 ]' 00:22:32.543 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:32.543 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:32.543 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:32.543 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:32.543 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:32.543 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:32.543 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:32.543 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:32.800 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmYyZjQxZTFmY2VjMzVkMWYxOGFmMzgxYjhjN2I4NGE1NjZlNzkwNWE3NWE2OWI5FkTRXA==: --dhchap-ctrl-secret DHHC-1:01:ZmIwMGJiMjI4OTIxZDM3YzQ2NWVjMjY1NTlkNDQ2ZDYpEJql: 00:22:32.800 12:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:02:NmYyZjQxZTFmY2VjMzVkMWYxOGFmMzgxYjhjN2I4NGE1NjZlNzkwNWE3NWE2OWI5FkTRXA==: --dhchap-ctrl-secret DHHC-1:01:ZmIwMGJiMjI4OTIxZDM3YzQ2NWVjMjY1NTlkNDQ2ZDYpEJql: 00:22:33.366 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:33.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:33.366 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:22:33.366 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.366 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.366 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.366 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:33.366 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:33.366 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:33.624 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:22:33.624 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:33.624 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:33.624 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:33.624 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:33.624 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:33.624 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key3 00:22:33.624 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.624 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.624 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.624 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:33.624 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:33.624 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:33.882 00:22:33.882 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:33.882 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:33.882 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:34.140 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.140 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:34.140 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.140 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.140 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.140 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:34.140 { 00:22:34.140 "auth": { 00:22:34.140 "dhgroup": "null", 00:22:34.140 "digest": "sha384", 00:22:34.140 "state": "completed" 00:22:34.140 }, 00:22:34.140 "cntlid": 55, 00:22:34.140 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:22:34.140 "listen_address": { 00:22:34.140 "adrfam": "IPv4", 00:22:34.140 "traddr": "10.0.0.3", 00:22:34.140 "trsvcid": "4420", 00:22:34.140 "trtype": "TCP" 00:22:34.140 }, 00:22:34.140 "peer_address": { 00:22:34.140 "adrfam": "IPv4", 00:22:34.140 "traddr": "10.0.0.1", 00:22:34.140 "trsvcid": "45272", 00:22:34.140 "trtype": "TCP" 00:22:34.140 }, 00:22:34.140 "qid": 0, 00:22:34.140 "state": "enabled", 00:22:34.140 "thread": "nvmf_tgt_poll_group_000" 00:22:34.140 } 00:22:34.140 ]' 00:22:34.140 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:34.140 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:34.140 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:34.140 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:34.140 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:34.140 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:34.140 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:34.140 12:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:34.398 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2UyNjUyY2YwZTcxNzU5NDA2MjBjYjM4OGYxNjE0MmMzNjJmN2FkY2I5MGE0Mzc2NThkZTZkZjUxMjA1OTFiZRcJPQM=: 00:22:34.398 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:03:Y2UyNjUyY2YwZTcxNzU5NDA2MjBjYjM4OGYxNjE0MmMzNjJmN2FkY2I5MGE0Mzc2NThkZTZkZjUxMjA1OTFiZRcJPQM=: 00:22:34.961 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:34.961 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:22:34.961 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:22:34.961 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.961 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.961 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.961 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:34.961 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:34.961 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:34.961 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:35.218 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:22:35.218 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:35.218 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:35.218 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:35.218 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:35.218 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:35.218 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:35.218 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.218 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.218 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.218 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:35.218 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:35.218 12:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:35.476 00:22:35.476 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:35.476 
12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:35.476 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:35.734 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.734 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:35.734 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.734 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.734 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.734 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:35.734 { 00:22:35.734 "auth": { 00:22:35.734 "dhgroup": "ffdhe2048", 00:22:35.734 "digest": "sha384", 00:22:35.734 "state": "completed" 00:22:35.734 }, 00:22:35.734 "cntlid": 57, 00:22:35.734 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:22:35.734 "listen_address": { 00:22:35.734 "adrfam": "IPv4", 00:22:35.734 "traddr": "10.0.0.3", 00:22:35.734 "trsvcid": "4420", 00:22:35.734 "trtype": "TCP" 00:22:35.734 }, 00:22:35.734 "peer_address": { 00:22:35.734 "adrfam": "IPv4", 00:22:35.734 "traddr": "10.0.0.1", 00:22:35.734 "trsvcid": "45310", 00:22:35.734 "trtype": "TCP" 00:22:35.734 }, 00:22:35.734 "qid": 0, 00:22:35.734 "state": "enabled", 00:22:35.734 "thread": "nvmf_tgt_poll_group_000" 00:22:35.734 } 00:22:35.734 ]' 00:22:35.734 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:35.734 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:35.734 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:35.734 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:35.735 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:35.735 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:35.735 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:35.735 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:35.993 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWI1NWY5YjdhYzZkMzBmZWJlZDRiMjEwNTk0N2MwYmJiYzVkYjIwMzdkNjRjMDgzTJxClg==: --dhchap-ctrl-secret DHHC-1:03:MGE5YzFjYzExNDY2MDk5ODZhMzJkZDBiOWE5ZTU5OWU5NGQwYWU4ZmIwY2I1MGI2ZDEyNzE0ZDBkZDFkZDYwNLfIQ3U=: 00:22:35.993 12:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:00:YWI1NWY5YjdhYzZkMzBmZWJlZDRiMjEwNTk0N2MwYmJiYzVkYjIwMzdkNjRjMDgzTJxClg==: 
--dhchap-ctrl-secret DHHC-1:03:MGE5YzFjYzExNDY2MDk5ODZhMzJkZDBiOWE5ZTU5OWU5NGQwYWU4ZmIwY2I1MGI2ZDEyNzE0ZDBkZDFkZDYwNLfIQ3U=: 00:22:36.574 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:36.574 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:36.574 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:22:36.574 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.574 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.574 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.574 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:36.574 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:36.574 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:36.880 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:22:36.880 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:36.880 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:36.880 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:36.880 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:36.880 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:36.880 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:36.880 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.880 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.880 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.880 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:36.880 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:36.880 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:37.138 00:22:37.138 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:37.138 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:37.138 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:37.138 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.138 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:37.138 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.138 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.138 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.138 12:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:37.138 { 00:22:37.138 "auth": { 00:22:37.138 "dhgroup": "ffdhe2048", 00:22:37.138 "digest": "sha384", 00:22:37.138 "state": "completed" 00:22:37.138 }, 00:22:37.138 "cntlid": 59, 00:22:37.138 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:22:37.138 "listen_address": { 00:22:37.138 "adrfam": "IPv4", 00:22:37.138 "traddr": "10.0.0.3", 00:22:37.138 "trsvcid": "4420", 00:22:37.138 "trtype": "TCP" 00:22:37.138 }, 00:22:37.138 "peer_address": { 00:22:37.138 "adrfam": "IPv4", 00:22:37.138 "traddr": "10.0.0.1", 00:22:37.138 "trsvcid": "45330", 00:22:37.138 "trtype": "TCP" 00:22:37.138 }, 00:22:37.138 "qid": 0, 00:22:37.138 "state": "enabled", 00:22:37.138 "thread": "nvmf_tgt_poll_group_000" 00:22:37.138 } 00:22:37.139 ]' 00:22:37.139 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:37.397 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:37.397 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:37.397 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:37.397 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:37.397 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:37.397 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:37.397 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:37.654 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmE3MzNkYWY0NmMyMWY0OTAzMGNhMzIzMWQxMGJmNzm5Zgcu: --dhchap-ctrl-secret DHHC-1:02:ZWVkMjdkYjYxYTVlODQ3MmQwOTg3NTQ2ZDdhYmI4M2JlY2FiN2U3ODUyZDEzMDRkpA1b6w==: 00:22:37.654 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:01:YmE3MzNkYWY0NmMyMWY0OTAzMGNhMzIzMWQxMGJmNzm5Zgcu: --dhchap-ctrl-secret DHHC-1:02:ZWVkMjdkYjYxYTVlODQ3MmQwOTg3NTQ2ZDdhYmI4M2JlY2FiN2U3ODUyZDEzMDRkpA1b6w==: 00:22:38.218 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:38.218 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:38.218 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:22:38.218 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.218 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.218 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.218 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:38.218 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:38.218 12:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:38.476 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:22:38.476 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:38.476 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:38.476 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:38.476 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:38.476 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:38.476 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:38.476 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.476 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.476 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.476 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:38.476 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:38.476 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:38.734 00:22:38.734 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:38.734 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:38.734 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:38.734 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.734 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:38.734 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.734 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.992 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.992 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:38.992 { 00:22:38.992 "auth": { 00:22:38.992 "dhgroup": "ffdhe2048", 00:22:38.992 "digest": "sha384", 00:22:38.992 "state": "completed" 00:22:38.992 }, 00:22:38.992 "cntlid": 61, 00:22:38.992 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:22:38.992 "listen_address": { 00:22:38.992 "adrfam": "IPv4", 00:22:38.992 "traddr": "10.0.0.3", 00:22:38.992 "trsvcid": "4420", 00:22:38.992 "trtype": "TCP" 00:22:38.992 }, 00:22:38.992 "peer_address": { 00:22:38.992 "adrfam": "IPv4", 00:22:38.992 "traddr": "10.0.0.1", 00:22:38.992 "trsvcid": "45348", 00:22:38.992 "trtype": "TCP" 00:22:38.992 }, 00:22:38.992 "qid": 0, 00:22:38.992 "state": "enabled", 00:22:38.992 "thread": "nvmf_tgt_poll_group_000" 00:22:38.992 } 00:22:38.992 ]' 00:22:38.992 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:38.992 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:38.992 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:38.992 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:38.992 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:38.992 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:38.992 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:38.992 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:39.250 12:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmYyZjQxZTFmY2VjMzVkMWYxOGFmMzgxYjhjN2I4NGE1NjZlNzkwNWE3NWE2OWI5FkTRXA==: --dhchap-ctrl-secret DHHC-1:01:ZmIwMGJiMjI4OTIxZDM3YzQ2NWVjMjY1NTlkNDQ2ZDYpEJql: 00:22:39.250 12:21:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:02:NmYyZjQxZTFmY2VjMzVkMWYxOGFmMzgxYjhjN2I4NGE1NjZlNzkwNWE3NWE2OWI5FkTRXA==: --dhchap-ctrl-secret DHHC-1:01:ZmIwMGJiMjI4OTIxZDM3YzQ2NWVjMjY1NTlkNDQ2ZDYpEJql: 00:22:39.815 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:39.815 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:39.815 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:22:39.815 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.815 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.815 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.815 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:39.815 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:39.815 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:40.074 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:22:40.074 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:40.074 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:40.074 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:40.074 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:40.074 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:40.074 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key3 00:22:40.074 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.074 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.074 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.074 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:40.074 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:40.074 12:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:40.332 00:22:40.332 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:40.332 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:40.332 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:40.590 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.590 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:40.590 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.590 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.590 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.590 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:40.590 { 00:22:40.590 "auth": { 00:22:40.590 "dhgroup": "ffdhe2048", 00:22:40.590 "digest": "sha384", 00:22:40.590 "state": "completed" 00:22:40.590 }, 00:22:40.590 "cntlid": 63, 00:22:40.590 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:22:40.591 "listen_address": { 00:22:40.591 "adrfam": "IPv4", 00:22:40.591 "traddr": "10.0.0.3", 00:22:40.591 "trsvcid": "4420", 00:22:40.591 "trtype": "TCP" 00:22:40.591 }, 00:22:40.591 "peer_address": { 00:22:40.591 "adrfam": "IPv4", 00:22:40.591 "traddr": "10.0.0.1", 00:22:40.591 "trsvcid": "40550", 00:22:40.591 "trtype": "TCP" 00:22:40.591 }, 00:22:40.591 "qid": 0, 00:22:40.591 "state": "enabled", 00:22:40.591 "thread": "nvmf_tgt_poll_group_000" 00:22:40.591 } 00:22:40.591 ]' 00:22:40.591 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:40.591 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:40.591 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:40.591 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:40.591 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:40.591 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:40.591 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:40.591 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:40.848 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2UyNjUyY2YwZTcxNzU5NDA2MjBjYjM4OGYxNjE0MmMzNjJmN2FkY2I5MGE0Mzc2NThkZTZkZjUxMjA1OTFiZRcJPQM=: 00:22:40.848 12:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:03:Y2UyNjUyY2YwZTcxNzU5NDA2MjBjYjM4OGYxNjE0MmMzNjJmN2FkY2I5MGE0Mzc2NThkZTZkZjUxMjA1OTFiZRcJPQM=: 00:22:41.416 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:41.416 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:41.416 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:22:41.416 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.416 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.416 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.416 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:41.416 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:41.416 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:41.416 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:41.717 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:22:41.717 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:41.717 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:41.717 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:41.717 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:41.717 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:41.717 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:41.717 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.717 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.717 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.717 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:41.717 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:22:41.717 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:41.975 00:22:41.975 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:41.975 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:41.975 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:42.233 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.233 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:42.233 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.233 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.233 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.233 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:42.233 { 00:22:42.233 "auth": { 00:22:42.233 "dhgroup": "ffdhe3072", 00:22:42.233 "digest": "sha384", 00:22:42.233 "state": "completed" 00:22:42.233 }, 00:22:42.233 "cntlid": 65, 00:22:42.233 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:22:42.233 "listen_address": { 00:22:42.233 "adrfam": "IPv4", 00:22:42.233 "traddr": "10.0.0.3", 00:22:42.233 "trsvcid": "4420", 00:22:42.233 "trtype": "TCP" 00:22:42.233 }, 00:22:42.233 "peer_address": { 00:22:42.233 "adrfam": "IPv4", 00:22:42.233 "traddr": "10.0.0.1", 00:22:42.233 "trsvcid": "40568", 00:22:42.233 "trtype": "TCP" 00:22:42.233 }, 00:22:42.233 "qid": 0, 00:22:42.233 "state": "enabled", 00:22:42.233 "thread": "nvmf_tgt_poll_group_000" 00:22:42.233 } 00:22:42.233 ]' 00:22:42.233 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:42.233 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:42.233 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:42.233 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:42.233 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:42.233 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:42.233 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:42.233 12:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:42.491 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:YWI1NWY5YjdhYzZkMzBmZWJlZDRiMjEwNTk0N2MwYmJiYzVkYjIwMzdkNjRjMDgzTJxClg==: --dhchap-ctrl-secret DHHC-1:03:MGE5YzFjYzExNDY2MDk5ODZhMzJkZDBiOWE5ZTU5OWU5NGQwYWU4ZmIwY2I1MGI2ZDEyNzE0ZDBkZDFkZDYwNLfIQ3U=: 00:22:42.491 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:00:YWI1NWY5YjdhYzZkMzBmZWJlZDRiMjEwNTk0N2MwYmJiYzVkYjIwMzdkNjRjMDgzTJxClg==: --dhchap-ctrl-secret DHHC-1:03:MGE5YzFjYzExNDY2MDk5ODZhMzJkZDBiOWE5ZTU5OWU5NGQwYWU4ZmIwY2I1MGI2ZDEyNzE0ZDBkZDFkZDYwNLfIQ3U=: 00:22:43.058 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:43.058 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:43.058 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:22:43.058 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.058 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.058 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.058 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:43.058 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:43.058 12:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:43.316 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:22:43.316 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:43.316 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:43.316 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:43.316 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:43.316 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:43.316 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:43.316 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.316 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.316 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.316 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:43.316 12:21:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:43.316 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:43.573 00:22:43.573 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:43.573 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:43.573 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:43.830 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.830 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:43.830 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.830 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.830 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.830 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:43.830 { 00:22:43.830 "auth": { 00:22:43.830 "dhgroup": "ffdhe3072", 00:22:43.830 "digest": "sha384", 00:22:43.830 "state": "completed" 00:22:43.830 }, 00:22:43.830 "cntlid": 67, 00:22:43.830 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:22:43.830 "listen_address": { 00:22:43.830 "adrfam": "IPv4", 00:22:43.830 "traddr": "10.0.0.3", 00:22:43.830 "trsvcid": "4420", 00:22:43.830 "trtype": "TCP" 00:22:43.830 }, 00:22:43.830 "peer_address": { 00:22:43.830 "adrfam": "IPv4", 00:22:43.830 "traddr": "10.0.0.1", 00:22:43.830 "trsvcid": "40584", 00:22:43.830 "trtype": "TCP" 00:22:43.830 }, 00:22:43.830 "qid": 0, 00:22:43.830 "state": "enabled", 00:22:43.830 "thread": "nvmf_tgt_poll_group_000" 00:22:43.830 } 00:22:43.830 ]' 00:22:43.830 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:43.830 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:43.830 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:43.830 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:43.830 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:43.830 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:43.830 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:43.830 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:44.089 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmE3MzNkYWY0NmMyMWY0OTAzMGNhMzIzMWQxMGJmNzm5Zgcu: --dhchap-ctrl-secret DHHC-1:02:ZWVkMjdkYjYxYTVlODQ3MmQwOTg3NTQ2ZDdhYmI4M2JlY2FiN2U3ODUyZDEzMDRkpA1b6w==: 00:22:44.089 12:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:01:YmE3MzNkYWY0NmMyMWY0OTAzMGNhMzIzMWQxMGJmNzm5Zgcu: --dhchap-ctrl-secret DHHC-1:02:ZWVkMjdkYjYxYTVlODQ3MmQwOTg3NTQ2ZDdhYmI4M2JlY2FiN2U3ODUyZDEzMDRkpA1b6w==: 00:22:44.656 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:44.656 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:44.656 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:22:44.656 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.656 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.657 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.657 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:44.657 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:44.657 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:44.916 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:22:44.916 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:44.916 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:44.916 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:44.916 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:44.916 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:44.916 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:44.916 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.916 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.916 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.916 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:44.916 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:44.916 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:45.175 00:22:45.175 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:45.175 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:45.175 12:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:45.433 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.433 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:45.433 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.434 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.434 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.434 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:45.434 { 00:22:45.434 "auth": { 00:22:45.434 "dhgroup": "ffdhe3072", 00:22:45.434 "digest": "sha384", 00:22:45.434 "state": "completed" 00:22:45.434 }, 00:22:45.434 "cntlid": 69, 00:22:45.434 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:22:45.434 "listen_address": { 00:22:45.434 "adrfam": "IPv4", 00:22:45.434 "traddr": "10.0.0.3", 00:22:45.434 "trsvcid": "4420", 00:22:45.434 "trtype": "TCP" 00:22:45.434 }, 00:22:45.434 "peer_address": { 00:22:45.434 "adrfam": "IPv4", 00:22:45.434 "traddr": "10.0.0.1", 00:22:45.434 "trsvcid": "40622", 00:22:45.434 "trtype": "TCP" 00:22:45.434 }, 00:22:45.434 "qid": 0, 00:22:45.434 "state": "enabled", 00:22:45.434 "thread": "nvmf_tgt_poll_group_000" 00:22:45.434 } 00:22:45.434 ]' 00:22:45.434 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:45.434 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:45.434 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:45.434 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:45.434 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:45.434 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:45.434 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:22:45.434 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:45.692 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmYyZjQxZTFmY2VjMzVkMWYxOGFmMzgxYjhjN2I4NGE1NjZlNzkwNWE3NWE2OWI5FkTRXA==: --dhchap-ctrl-secret DHHC-1:01:ZmIwMGJiMjI4OTIxZDM3YzQ2NWVjMjY1NTlkNDQ2ZDYpEJql: 00:22:45.692 12:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:02:NmYyZjQxZTFmY2VjMzVkMWYxOGFmMzgxYjhjN2I4NGE1NjZlNzkwNWE3NWE2OWI5FkTRXA==: --dhchap-ctrl-secret DHHC-1:01:ZmIwMGJiMjI4OTIxZDM3YzQ2NWVjMjY1NTlkNDQ2ZDYpEJql: 00:22:46.258 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:46.258 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:46.258 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:22:46.259 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.259 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.259 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.259 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:46.259 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:46.259 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:46.517 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:22:46.517 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:46.517 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:46.517 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:46.517 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:46.517 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:46.517 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key3 00:22:46.517 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.517 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.517 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.517 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:46.517 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:46.517 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:46.775 00:22:46.775 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:46.775 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:46.775 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:47.033 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:47.033 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:47.033 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.033 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.033 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.033 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:47.033 { 00:22:47.033 "auth": { 00:22:47.033 "dhgroup": "ffdhe3072", 00:22:47.033 "digest": "sha384", 00:22:47.033 "state": "completed" 00:22:47.033 }, 00:22:47.033 "cntlid": 71, 00:22:47.033 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:22:47.033 "listen_address": { 00:22:47.033 "adrfam": "IPv4", 00:22:47.033 "traddr": "10.0.0.3", 00:22:47.033 "trsvcid": "4420", 00:22:47.033 "trtype": "TCP" 00:22:47.033 }, 00:22:47.033 "peer_address": { 00:22:47.033 "adrfam": "IPv4", 00:22:47.033 "traddr": "10.0.0.1", 00:22:47.033 "trsvcid": "40652", 00:22:47.033 "trtype": "TCP" 00:22:47.033 }, 00:22:47.033 "qid": 0, 00:22:47.033 "state": "enabled", 00:22:47.033 "thread": "nvmf_tgt_poll_group_000" 00:22:47.033 } 00:22:47.033 ]' 00:22:47.033 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:47.033 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:47.033 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:47.033 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:47.033 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:47.291 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:47.291 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:47.291 12:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:47.291 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2UyNjUyY2YwZTcxNzU5NDA2MjBjYjM4OGYxNjE0MmMzNjJmN2FkY2I5MGE0Mzc2NThkZTZkZjUxMjA1OTFiZRcJPQM=: 00:22:47.291 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:03:Y2UyNjUyY2YwZTcxNzU5NDA2MjBjYjM4OGYxNjE0MmMzNjJmN2FkY2I5MGE0Mzc2NThkZTZkZjUxMjA1OTFiZRcJPQM=: 00:22:47.857 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:47.857 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:47.857 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:22:47.857 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.857 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.857 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.857 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:47.857 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:47.857 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:47.857 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:48.126 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:22:48.126 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:48.126 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:48.126 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:48.126 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:48.126 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:48.126 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:48.126 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.126 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.126 12:21:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.126 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:48.126 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:48.126 12:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:48.384 00:22:48.384 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:48.384 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:48.384 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:48.641 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.641 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:48.641 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.641 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.641 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.641 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:48.641 { 00:22:48.641 "auth": { 00:22:48.641 "dhgroup": "ffdhe4096", 00:22:48.641 "digest": "sha384", 00:22:48.641 "state": "completed" 00:22:48.641 }, 00:22:48.641 "cntlid": 73, 00:22:48.641 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:22:48.641 "listen_address": { 00:22:48.641 "adrfam": "IPv4", 00:22:48.641 "traddr": "10.0.0.3", 00:22:48.641 "trsvcid": "4420", 00:22:48.641 "trtype": "TCP" 00:22:48.641 }, 00:22:48.641 "peer_address": { 00:22:48.641 "adrfam": "IPv4", 00:22:48.641 "traddr": "10.0.0.1", 00:22:48.641 "trsvcid": "40684", 00:22:48.641 "trtype": "TCP" 00:22:48.641 }, 00:22:48.641 "qid": 0, 00:22:48.641 "state": "enabled", 00:22:48.641 "thread": "nvmf_tgt_poll_group_000" 00:22:48.641 } 00:22:48.641 ]' 00:22:48.641 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:48.641 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:48.641 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:48.899 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:48.899 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:48.899 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:48.899 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:48.899 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:49.157 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWI1NWY5YjdhYzZkMzBmZWJlZDRiMjEwNTk0N2MwYmJiYzVkYjIwMzdkNjRjMDgzTJxClg==: --dhchap-ctrl-secret DHHC-1:03:MGE5YzFjYzExNDY2MDk5ODZhMzJkZDBiOWE5ZTU5OWU5NGQwYWU4ZmIwY2I1MGI2ZDEyNzE0ZDBkZDFkZDYwNLfIQ3U=: 00:22:49.157 12:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:00:YWI1NWY5YjdhYzZkMzBmZWJlZDRiMjEwNTk0N2MwYmJiYzVkYjIwMzdkNjRjMDgzTJxClg==: --dhchap-ctrl-secret DHHC-1:03:MGE5YzFjYzExNDY2MDk5ODZhMzJkZDBiOWE5ZTU5OWU5NGQwYWU4ZmIwY2I1MGI2ZDEyNzE0ZDBkZDFkZDYwNLfIQ3U=: 00:22:49.722 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:49.722 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:49.722 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:22:49.722 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.722 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.722 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.722 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:49.722 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:49.722 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:49.722 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:22:49.722 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:49.722 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:49.722 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:49.722 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:49.722 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:49.722 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:49.722 12:21:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.722 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.979 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.979 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:49.979 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:49.979 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:50.235 00:22:50.235 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:50.235 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:50.235 12:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:50.492 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:50.492 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:50.492 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.492 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.492 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.492 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:50.492 { 00:22:50.492 "auth": { 00:22:50.492 "dhgroup": "ffdhe4096", 00:22:50.492 "digest": "sha384", 00:22:50.492 "state": "completed" 00:22:50.492 }, 00:22:50.492 "cntlid": 75, 00:22:50.492 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:22:50.492 "listen_address": { 00:22:50.492 "adrfam": "IPv4", 00:22:50.492 "traddr": "10.0.0.3", 00:22:50.492 "trsvcid": "4420", 00:22:50.492 "trtype": "TCP" 00:22:50.492 }, 00:22:50.492 "peer_address": { 00:22:50.492 "adrfam": "IPv4", 00:22:50.492 "traddr": "10.0.0.1", 00:22:50.492 "trsvcid": "58568", 00:22:50.492 "trtype": "TCP" 00:22:50.492 }, 00:22:50.492 "qid": 0, 00:22:50.492 "state": "enabled", 00:22:50.492 "thread": "nvmf_tgt_poll_group_000" 00:22:50.492 } 00:22:50.492 ]' 00:22:50.492 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:50.492 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:50.492 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:50.492 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:22:50.492 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:50.492 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:50.492 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:50.492 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:50.799 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmE3MzNkYWY0NmMyMWY0OTAzMGNhMzIzMWQxMGJmNzm5Zgcu: --dhchap-ctrl-secret DHHC-1:02:ZWVkMjdkYjYxYTVlODQ3MmQwOTg3NTQ2ZDdhYmI4M2JlY2FiN2U3ODUyZDEzMDRkpA1b6w==: 00:22:50.799 12:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:01:YmE3MzNkYWY0NmMyMWY0OTAzMGNhMzIzMWQxMGJmNzm5Zgcu: --dhchap-ctrl-secret DHHC-1:02:ZWVkMjdkYjYxYTVlODQ3MmQwOTg3NTQ2ZDdhYmI4M2JlY2FiN2U3ODUyZDEzMDRkpA1b6w==: 00:22:51.411 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:51.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:51.411 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:22:51.411 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.411 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.412 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.412 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:51.412 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:51.412 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:51.412 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:22:51.412 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:51.412 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:51.412 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:51.412 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:51.412 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:51.412 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:51.412 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.412 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.412 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.412 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:51.412 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:51.412 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:51.976 00:22:51.976 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:51.976 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:51.976 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:51.976 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:51.976 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:51.976 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.976 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.976 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.976 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:51.976 { 00:22:51.976 "auth": { 00:22:51.976 "dhgroup": "ffdhe4096", 00:22:51.976 "digest": "sha384", 00:22:51.976 "state": "completed" 00:22:51.976 }, 00:22:51.976 "cntlid": 77, 00:22:51.976 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:22:51.976 "listen_address": { 00:22:51.976 "adrfam": "IPv4", 00:22:51.976 "traddr": "10.0.0.3", 00:22:51.976 "trsvcid": "4420", 00:22:51.976 "trtype": "TCP" 00:22:51.976 }, 00:22:51.976 "peer_address": { 00:22:51.976 "adrfam": "IPv4", 00:22:51.976 "traddr": "10.0.0.1", 00:22:51.976 "trsvcid": "58590", 00:22:51.976 "trtype": "TCP" 00:22:51.976 }, 00:22:51.976 "qid": 0, 00:22:51.976 "state": "enabled", 00:22:51.976 "thread": "nvmf_tgt_poll_group_000" 00:22:51.976 } 00:22:51.976 ]' 00:22:51.977 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:51.977 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:51.977 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:22:51.977 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:51.977 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:52.233 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:52.233 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:52.233 12:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:52.489 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmYyZjQxZTFmY2VjMzVkMWYxOGFmMzgxYjhjN2I4NGE1NjZlNzkwNWE3NWE2OWI5FkTRXA==: --dhchap-ctrl-secret DHHC-1:01:ZmIwMGJiMjI4OTIxZDM3YzQ2NWVjMjY1NTlkNDQ2ZDYpEJql: 00:22:52.489 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:02:NmYyZjQxZTFmY2VjMzVkMWYxOGFmMzgxYjhjN2I4NGE1NjZlNzkwNWE3NWE2OWI5FkTRXA==: --dhchap-ctrl-secret DHHC-1:01:ZmIwMGJiMjI4OTIxZDM3YzQ2NWVjMjY1NTlkNDQ2ZDYpEJql: 00:22:53.053 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:53.053 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:53.053 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:22:53.053 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.053 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.053 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.053 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:53.053 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:53.053 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:53.053 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:22:53.053 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:53.053 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:53.053 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:53.053 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:53.053 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:53.053 12:21:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key3 00:22:53.053 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.053 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.053 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.053 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:53.053 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:53.053 12:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:53.618 00:22:53.618 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:53.618 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:53.618 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:53.618 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:53.618 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:53.618 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.618 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.618 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.618 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:53.618 { 00:22:53.618 "auth": { 00:22:53.618 "dhgroup": "ffdhe4096", 00:22:53.618 "digest": "sha384", 00:22:53.618 "state": "completed" 00:22:53.618 }, 00:22:53.618 "cntlid": 79, 00:22:53.618 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:22:53.618 "listen_address": { 00:22:53.618 "adrfam": "IPv4", 00:22:53.619 "traddr": "10.0.0.3", 00:22:53.619 "trsvcid": "4420", 00:22:53.619 "trtype": "TCP" 00:22:53.619 }, 00:22:53.619 "peer_address": { 00:22:53.619 "adrfam": "IPv4", 00:22:53.619 "traddr": "10.0.0.1", 00:22:53.619 "trsvcid": "58614", 00:22:53.619 "trtype": "TCP" 00:22:53.619 }, 00:22:53.619 "qid": 0, 00:22:53.619 "state": "enabled", 00:22:53.619 "thread": "nvmf_tgt_poll_group_000" 00:22:53.619 } 00:22:53.619 ]' 00:22:53.619 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:53.619 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:53.619 12:21:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:53.877 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:53.877 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:53.877 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:53.877 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:53.877 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:54.134 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2UyNjUyY2YwZTcxNzU5NDA2MjBjYjM4OGYxNjE0MmMzNjJmN2FkY2I5MGE0Mzc2NThkZTZkZjUxMjA1OTFiZRcJPQM=: 00:22:54.134 12:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:03:Y2UyNjUyY2YwZTcxNzU5NDA2MjBjYjM4OGYxNjE0MmMzNjJmN2FkY2I5MGE0Mzc2NThkZTZkZjUxMjA1OTFiZRcJPQM=: 00:22:54.728 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:54.728 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:54.728 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:22:54.728 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.728 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.728 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.728 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:54.728 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:54.728 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:54.728 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:54.728 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:22:54.728 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:54.728 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:54.728 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:54.728 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:54.728 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:54.728 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:54.728 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.728 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.728 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.728 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:54.728 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:54.728 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:55.295 00:22:55.295 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:55.295 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:55.295 12:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:55.554 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:55.554 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:55.554 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.554 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.554 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.554 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:55.554 { 00:22:55.554 "auth": { 00:22:55.554 "dhgroup": "ffdhe6144", 00:22:55.554 "digest": "sha384", 00:22:55.554 "state": "completed" 00:22:55.554 }, 00:22:55.554 "cntlid": 81, 00:22:55.554 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:22:55.554 "listen_address": { 00:22:55.554 "adrfam": "IPv4", 00:22:55.554 "traddr": "10.0.0.3", 00:22:55.554 "trsvcid": "4420", 00:22:55.554 "trtype": "TCP" 00:22:55.554 }, 00:22:55.554 "peer_address": { 00:22:55.554 "adrfam": "IPv4", 00:22:55.554 "traddr": "10.0.0.1", 00:22:55.554 "trsvcid": "58656", 00:22:55.554 "trtype": "TCP" 00:22:55.554 }, 00:22:55.554 "qid": 0, 00:22:55.554 "state": "enabled", 00:22:55.554 "thread": "nvmf_tgt_poll_group_000" 00:22:55.554 } 00:22:55.554 ]' 00:22:55.554 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
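For reference, the verification step exercised throughout this run reduces to three probes against the target's qpair listing. A minimal sketch (subsystem NQN taken from the log; the rpc.py invocation is abbreviated relative to the full paths shown above):

    # Fetch the active qpairs for the subsystem and inspect the auth descriptor.
    qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    jq -r '.[0].auth.digest'  <<< "$qpairs"   # expected: sha384
    jq -r '.[0].auth.dhgroup' <<< "$qpairs"   # expected: the dhgroup of the current pass (ffdhe6144 here)
    jq -r '.[0].auth.state'   <<< "$qpairs"   # expected: completed, i.e. DH-HMAC-CHAP negotiation finished

A pass requires all three probes to match: the qpair itself reports "state": "enabled", while "auth": { "state": "completed" } is what each [[ ... ]] comparison in the log asserts.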
00:22:55.554 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:55.554 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:55.554 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:55.554 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:55.554 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:55.554 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:55.554 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:55.812 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWI1NWY5YjdhYzZkMzBmZWJlZDRiMjEwNTk0N2MwYmJiYzVkYjIwMzdkNjRjMDgzTJxClg==: --dhchap-ctrl-secret DHHC-1:03:MGE5YzFjYzExNDY2MDk5ODZhMzJkZDBiOWE5ZTU5OWU5NGQwYWU4ZmIwY2I1MGI2ZDEyNzE0ZDBkZDFkZDYwNLfIQ3U=: 00:22:55.812 12:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:00:YWI1NWY5YjdhYzZkMzBmZWJlZDRiMjEwNTk0N2MwYmJiYzVkYjIwMzdkNjRjMDgzTJxClg==: --dhchap-ctrl-secret DHHC-1:03:MGE5YzFjYzExNDY2MDk5ODZhMzJkZDBiOWE5ZTU5OWU5NGQwYWU4ZmIwY2I1MGI2ZDEyNzE0ZDBkZDFkZDYwNLfIQ3U=: 00:22:56.379 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:56.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:56.379 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:22:56.379 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.379 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.379 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.379 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:56.379 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:56.379 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:56.637 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:22:56.637 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:56.637 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:56.637 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:22:56.637 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:56.637 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:56.637 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:56.637 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.637 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.637 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.637 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:56.637 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:56.637 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:56.894 00:22:56.894 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:56.894 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:56.894 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:57.152 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:57.152 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:57.152 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.152 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.152 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.152 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:57.152 { 00:22:57.152 "auth": { 00:22:57.152 "dhgroup": "ffdhe6144", 00:22:57.152 "digest": "sha384", 00:22:57.152 "state": "completed" 00:22:57.152 }, 00:22:57.152 "cntlid": 83, 00:22:57.152 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:22:57.152 "listen_address": { 00:22:57.152 "adrfam": "IPv4", 00:22:57.152 "traddr": "10.0.0.3", 00:22:57.152 "trsvcid": "4420", 00:22:57.152 "trtype": "TCP" 00:22:57.152 }, 00:22:57.152 "peer_address": { 00:22:57.152 "adrfam": "IPv4", 00:22:57.152 "traddr": "10.0.0.1", 00:22:57.152 "trsvcid": "58672", 00:22:57.152 "trtype": "TCP" 00:22:57.152 }, 00:22:57.152 "qid": 0, 00:22:57.152 "state": 
"enabled", 00:22:57.152 "thread": "nvmf_tgt_poll_group_000" 00:22:57.152 } 00:22:57.152 ]' 00:22:57.152 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:57.152 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:57.152 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:57.152 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:57.152 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:57.152 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:57.152 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:57.152 12:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:57.410 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmE3MzNkYWY0NmMyMWY0OTAzMGNhMzIzMWQxMGJmNzm5Zgcu: --dhchap-ctrl-secret DHHC-1:02:ZWVkMjdkYjYxYTVlODQ3MmQwOTg3NTQ2ZDdhYmI4M2JlY2FiN2U3ODUyZDEzMDRkpA1b6w==: 00:22:57.410 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:01:YmE3MzNkYWY0NmMyMWY0OTAzMGNhMzIzMWQxMGJmNzm5Zgcu: --dhchap-ctrl-secret DHHC-1:02:ZWVkMjdkYjYxYTVlODQ3MmQwOTg3NTQ2ZDdhYmI4M2JlY2FiN2U3ODUyZDEzMDRkpA1b6w==: 00:22:57.974 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:57.974 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:57.974 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:22:57.974 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.974 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.974 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.974 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:57.974 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:57.974 12:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:58.232 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:22:58.232 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:58.232 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha384 00:22:58.232 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:58.232 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:58.232 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:58.232 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:58.232 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.232 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.232 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.232 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:58.232 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:58.232 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:58.797 00:22:58.797 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:58.797 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:58.797 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:58.797 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:58.797 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:58.797 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.797 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.797 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.797 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:58.797 { 00:22:58.797 "auth": { 00:22:58.797 "dhgroup": "ffdhe6144", 00:22:58.797 "digest": "sha384", 00:22:58.797 "state": "completed" 00:22:58.797 }, 00:22:58.797 "cntlid": 85, 00:22:58.797 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:22:58.797 "listen_address": { 00:22:58.797 "adrfam": "IPv4", 00:22:58.797 "traddr": "10.0.0.3", 00:22:58.797 "trsvcid": "4420", 00:22:58.797 "trtype": "TCP" 00:22:58.797 }, 00:22:58.797 "peer_address": { 00:22:58.797 "adrfam": "IPv4", 00:22:58.797 "traddr": "10.0.0.1", 00:22:58.797 
"trsvcid": "58700", 00:22:58.797 "trtype": "TCP" 00:22:58.797 }, 00:22:58.797 "qid": 0, 00:22:58.797 "state": "enabled", 00:22:58.797 "thread": "nvmf_tgt_poll_group_000" 00:22:58.797 } 00:22:58.797 ]' 00:22:58.797 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:59.055 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:59.055 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:59.055 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:59.055 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:59.055 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:59.055 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:59.055 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:59.313 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmYyZjQxZTFmY2VjMzVkMWYxOGFmMzgxYjhjN2I4NGE1NjZlNzkwNWE3NWE2OWI5FkTRXA==: --dhchap-ctrl-secret DHHC-1:01:ZmIwMGJiMjI4OTIxZDM3YzQ2NWVjMjY1NTlkNDQ2ZDYpEJql: 00:22:59.313 12:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:02:NmYyZjQxZTFmY2VjMzVkMWYxOGFmMzgxYjhjN2I4NGE1NjZlNzkwNWE3NWE2OWI5FkTRXA==: --dhchap-ctrl-secret DHHC-1:01:ZmIwMGJiMjI4OTIxZDM3YzQ2NWVjMjY1NTlkNDQ2ZDYpEJql: 00:22:59.879 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:59.879 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:59.879 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:22:59.879 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.879 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.879 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.879 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:59.879 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:59.879 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:00.137 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:23:00.137 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:23:00.137 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:00.137 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:00.137 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:00.137 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:00.137 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key3 00:23:00.137 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.137 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.137 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.137 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:00.137 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:00.137 12:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:00.394 00:23:00.394 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:00.394 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:00.394 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:00.652 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.652 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:00.652 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.652 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.652 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.652 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:00.652 { 00:23:00.652 "auth": { 00:23:00.652 "dhgroup": "ffdhe6144", 00:23:00.652 "digest": "sha384", 00:23:00.652 "state": "completed" 00:23:00.652 }, 00:23:00.652 "cntlid": 87, 00:23:00.652 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:23:00.652 "listen_address": { 00:23:00.652 "adrfam": "IPv4", 00:23:00.652 "traddr": "10.0.0.3", 00:23:00.652 "trsvcid": "4420", 00:23:00.652 "trtype": "TCP" 00:23:00.652 }, 00:23:00.652 "peer_address": { 00:23:00.652 "adrfam": "IPv4", 00:23:00.652 "traddr": "10.0.0.1", 
00:23:00.652 "trsvcid": "42208", 00:23:00.652 "trtype": "TCP" 00:23:00.652 }, 00:23:00.652 "qid": 0, 00:23:00.652 "state": "enabled", 00:23:00.652 "thread": "nvmf_tgt_poll_group_000" 00:23:00.652 } 00:23:00.652 ]' 00:23:00.652 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:00.652 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:00.652 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:00.652 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:00.652 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:00.652 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:00.652 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:00.652 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:00.912 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2UyNjUyY2YwZTcxNzU5NDA2MjBjYjM4OGYxNjE0MmMzNjJmN2FkY2I5MGE0Mzc2NThkZTZkZjUxMjA1OTFiZRcJPQM=: 00:23:00.912 12:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:03:Y2UyNjUyY2YwZTcxNzU5NDA2MjBjYjM4OGYxNjE0MmMzNjJmN2FkY2I5MGE0Mzc2NThkZTZkZjUxMjA1OTFiZRcJPQM=: 00:23:01.870 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:01.871 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:01.871 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:23:01.871 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.871 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.871 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.871 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:01.871 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:01.871 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:01.871 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:01.871 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:23:01.871 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- 
# local digest dhgroup key ckey qpairs 00:23:01.871 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:01.871 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:01.871 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:01.871 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:01.871 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:01.871 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.871 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.871 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.871 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:01.871 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:01.871 12:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:02.436 00:23:02.436 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:02.436 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:02.436 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:02.694 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.694 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:02.694 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.694 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.694 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.694 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:02.694 { 00:23:02.694 "auth": { 00:23:02.694 "dhgroup": "ffdhe8192", 00:23:02.694 "digest": "sha384", 00:23:02.694 "state": "completed" 00:23:02.694 }, 00:23:02.694 "cntlid": 89, 00:23:02.694 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:23:02.694 "listen_address": { 00:23:02.694 "adrfam": "IPv4", 00:23:02.694 "traddr": "10.0.0.3", 00:23:02.694 "trsvcid": "4420", 00:23:02.694 "trtype": "TCP" 
00:23:02.694 }, 00:23:02.694 "peer_address": { 00:23:02.694 "adrfam": "IPv4", 00:23:02.694 "traddr": "10.0.0.1", 00:23:02.694 "trsvcid": "42236", 00:23:02.694 "trtype": "TCP" 00:23:02.694 }, 00:23:02.694 "qid": 0, 00:23:02.694 "state": "enabled", 00:23:02.694 "thread": "nvmf_tgt_poll_group_000" 00:23:02.694 } 00:23:02.694 ]' 00:23:02.694 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:02.694 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:02.694 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:02.694 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:02.694 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:02.694 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:02.694 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:02.694 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:02.952 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWI1NWY5YjdhYzZkMzBmZWJlZDRiMjEwNTk0N2MwYmJiYzVkYjIwMzdkNjRjMDgzTJxClg==: --dhchap-ctrl-secret DHHC-1:03:MGE5YzFjYzExNDY2MDk5ODZhMzJkZDBiOWE5ZTU5OWU5NGQwYWU4ZmIwY2I1MGI2ZDEyNzE0ZDBkZDFkZDYwNLfIQ3U=: 00:23:02.952 12:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:00:YWI1NWY5YjdhYzZkMzBmZWJlZDRiMjEwNTk0N2MwYmJiYzVkYjIwMzdkNjRjMDgzTJxClg==: --dhchap-ctrl-secret DHHC-1:03:MGE5YzFjYzExNDY2MDk5ODZhMzJkZDBiOWE5ZTU5OWU5NGQwYWU4ZmIwY2I1MGI2ZDEyNzE0ZDBkZDFkZDYwNLfIQ3U=: 00:23:03.517 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:03.517 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:03.517 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:23:03.518 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.518 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.518 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.518 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:03.518 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:03.518 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:03.775 12:21:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:23:03.775 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:03.775 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:03.775 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:03.775 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:03.775 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:03.775 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:03.775 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.775 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.775 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.775 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:03.775 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:03.775 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:04.341 00:23:04.341 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:04.341 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:04.341 12:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:04.341 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.341 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:04.341 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.341 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.599 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.599 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:04.599 { 00:23:04.599 "auth": { 00:23:04.599 "dhgroup": "ffdhe8192", 00:23:04.599 "digest": "sha384", 00:23:04.599 "state": "completed" 00:23:04.599 }, 00:23:04.599 "cntlid": 91, 00:23:04.599 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:23:04.599 "listen_address": { 00:23:04.599 "adrfam": "IPv4", 00:23:04.599 "traddr": "10.0.0.3", 00:23:04.599 "trsvcid": "4420", 00:23:04.599 "trtype": "TCP" 00:23:04.599 }, 00:23:04.599 "peer_address": { 00:23:04.599 "adrfam": "IPv4", 00:23:04.599 "traddr": "10.0.0.1", 00:23:04.599 "trsvcid": "42264", 00:23:04.599 "trtype": "TCP" 00:23:04.599 }, 00:23:04.599 "qid": 0, 00:23:04.599 "state": "enabled", 00:23:04.599 "thread": "nvmf_tgt_poll_group_000" 00:23:04.599 } 00:23:04.599 ]' 00:23:04.599 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:04.599 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:04.599 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:04.599 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:04.599 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:04.599 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:04.599 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:04.599 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:04.599 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmE3MzNkYWY0NmMyMWY0OTAzMGNhMzIzMWQxMGJmNzm5Zgcu: --dhchap-ctrl-secret DHHC-1:02:ZWVkMjdkYjYxYTVlODQ3MmQwOTg3NTQ2ZDdhYmI4M2JlY2FiN2U3ODUyZDEzMDRkpA1b6w==: 00:23:04.599 12:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:01:YmE3MzNkYWY0NmMyMWY0OTAzMGNhMzIzMWQxMGJmNzm5Zgcu: --dhchap-ctrl-secret DHHC-1:02:ZWVkMjdkYjYxYTVlODQ3MmQwOTg3NTQ2ZDdhYmI4M2JlY2FiN2U3ODUyZDEzMDRkpA1b6w==: 00:23:05.165 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:05.423 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:05.423 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:23:05.423 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.423 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.423 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.423 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:05.423 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:05.423 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:05.423 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:23:05.423 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:05.423 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:05.423 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:05.423 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:05.423 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:05.423 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:05.423 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.423 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.423 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.423 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:05.423 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:05.424 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:05.988 00:23:05.988 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:05.988 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:05.988 12:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:06.246 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.246 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:06.246 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.246 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.246 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.246 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:06.246 { 00:23:06.246 "auth": { 00:23:06.246 "dhgroup": "ffdhe8192", 
00:23:06.246 "digest": "sha384", 00:23:06.246 "state": "completed" 00:23:06.246 }, 00:23:06.246 "cntlid": 93, 00:23:06.246 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:23:06.246 "listen_address": { 00:23:06.246 "adrfam": "IPv4", 00:23:06.246 "traddr": "10.0.0.3", 00:23:06.246 "trsvcid": "4420", 00:23:06.246 "trtype": "TCP" 00:23:06.246 }, 00:23:06.246 "peer_address": { 00:23:06.246 "adrfam": "IPv4", 00:23:06.246 "traddr": "10.0.0.1", 00:23:06.247 "trsvcid": "42296", 00:23:06.247 "trtype": "TCP" 00:23:06.247 }, 00:23:06.247 "qid": 0, 00:23:06.247 "state": "enabled", 00:23:06.247 "thread": "nvmf_tgt_poll_group_000" 00:23:06.247 } 00:23:06.247 ]' 00:23:06.247 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:06.247 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:06.247 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:06.247 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:06.247 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:06.247 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:06.247 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:06.247 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:06.504 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmYyZjQxZTFmY2VjMzVkMWYxOGFmMzgxYjhjN2I4NGE1NjZlNzkwNWE3NWE2OWI5FkTRXA==: --dhchap-ctrl-secret DHHC-1:01:ZmIwMGJiMjI4OTIxZDM3YzQ2NWVjMjY1NTlkNDQ2ZDYpEJql: 00:23:06.504 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:02:NmYyZjQxZTFmY2VjMzVkMWYxOGFmMzgxYjhjN2I4NGE1NjZlNzkwNWE3NWE2OWI5FkTRXA==: --dhchap-ctrl-secret DHHC-1:01:ZmIwMGJiMjI4OTIxZDM3YzQ2NWVjMjY1NTlkNDQ2ZDYpEJql: 00:23:07.068 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:07.068 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:07.068 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:23:07.068 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.068 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.068 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.068 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:07.068 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:23:07.068 12:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:07.326 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:23:07.326 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:07.326 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:23:07.326 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:07.326 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:07.326 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:07.326 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key3 00:23:07.326 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.326 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.326 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.326 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:07.326 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:07.326 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:07.891 00:23:07.891 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:07.891 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:07.891 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:08.148 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:08.148 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:08.148 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.148 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.148 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.148 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:08.148 { 00:23:08.148 "auth": { 00:23:08.148 "dhgroup": 
"ffdhe8192", 00:23:08.148 "digest": "sha384", 00:23:08.148 "state": "completed" 00:23:08.148 }, 00:23:08.148 "cntlid": 95, 00:23:08.148 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:23:08.148 "listen_address": { 00:23:08.148 "adrfam": "IPv4", 00:23:08.148 "traddr": "10.0.0.3", 00:23:08.148 "trsvcid": "4420", 00:23:08.148 "trtype": "TCP" 00:23:08.148 }, 00:23:08.148 "peer_address": { 00:23:08.148 "adrfam": "IPv4", 00:23:08.148 "traddr": "10.0.0.1", 00:23:08.148 "trsvcid": "42324", 00:23:08.148 "trtype": "TCP" 00:23:08.148 }, 00:23:08.148 "qid": 0, 00:23:08.148 "state": "enabled", 00:23:08.148 "thread": "nvmf_tgt_poll_group_000" 00:23:08.148 } 00:23:08.148 ]' 00:23:08.148 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:08.148 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:23:08.148 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:08.148 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:08.148 12:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:08.148 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:08.148 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:08.148 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:08.405 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2UyNjUyY2YwZTcxNzU5NDA2MjBjYjM4OGYxNjE0MmMzNjJmN2FkY2I5MGE0Mzc2NThkZTZkZjUxMjA1OTFiZRcJPQM=: 00:23:08.405 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:03:Y2UyNjUyY2YwZTcxNzU5NDA2MjBjYjM4OGYxNjE0MmMzNjJmN2FkY2I5MGE0Mzc2NThkZTZkZjUxMjA1OTFiZRcJPQM=: 00:23:08.970 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:08.970 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:08.970 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:23:08.970 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.970 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.970 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.970 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:23:08.970 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:08.970 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:08.970 
12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:08.970 12:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:09.228 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:23:09.228 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:09.228 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:09.228 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:09.228 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:09.228 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:09.228 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:09.228 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.228 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.228 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.228 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:09.228 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:09.228 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:09.487 00:23:09.487 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:09.487 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:09.487 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:09.745 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.745 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:09.745 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.745 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.745 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.745 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:09.745 { 00:23:09.745 "auth": { 00:23:09.745 "dhgroup": "null", 00:23:09.745 "digest": "sha512", 00:23:09.745 "state": "completed" 00:23:09.745 }, 00:23:09.745 "cntlid": 97, 00:23:09.745 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:23:09.745 "listen_address": { 00:23:09.745 "adrfam": "IPv4", 00:23:09.745 "traddr": "10.0.0.3", 00:23:09.745 "trsvcid": "4420", 00:23:09.745 "trtype": "TCP" 00:23:09.745 }, 00:23:09.745 "peer_address": { 00:23:09.745 "adrfam": "IPv4", 00:23:09.745 "traddr": "10.0.0.1", 00:23:09.745 "trsvcid": "42368", 00:23:09.745 "trtype": "TCP" 00:23:09.745 }, 00:23:09.745 "qid": 0, 00:23:09.745 "state": "enabled", 00:23:09.745 "thread": "nvmf_tgt_poll_group_000" 00:23:09.745 } 00:23:09.745 ]' 00:23:09.745 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:09.745 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:09.745 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:09.745 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:09.745 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:09.745 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:09.745 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:09.745 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:10.003 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWI1NWY5YjdhYzZkMzBmZWJlZDRiMjEwNTk0N2MwYmJiYzVkYjIwMzdkNjRjMDgzTJxClg==: --dhchap-ctrl-secret DHHC-1:03:MGE5YzFjYzExNDY2MDk5ODZhMzJkZDBiOWE5ZTU5OWU5NGQwYWU4ZmIwY2I1MGI2ZDEyNzE0ZDBkZDFkZDYwNLfIQ3U=: 00:23:10.003 12:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:00:YWI1NWY5YjdhYzZkMzBmZWJlZDRiMjEwNTk0N2MwYmJiYzVkYjIwMzdkNjRjMDgzTJxClg==: --dhchap-ctrl-secret DHHC-1:03:MGE5YzFjYzExNDY2MDk5ODZhMzJkZDBiOWE5ZTU5OWU5NGQwYWU4ZmIwY2I1MGI2ZDEyNzE0ZDBkZDFkZDYwNLfIQ3U=: 00:23:10.568 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:10.568 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:10.568 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:23:10.568 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.568 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.568 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:23:10.568 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:10.568 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:10.568 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:10.826 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:23:10.826 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:10.826 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:10.826 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:10.826 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:10.826 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:10.826 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:10.826 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.826 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.826 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.826 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:10.826 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:10.827 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:11.084 00:23:11.084 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:11.084 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:11.084 12:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:11.342 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.342 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:11.342 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.342 12:22:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.342 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.342 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:11.342 { 00:23:11.342 "auth": { 00:23:11.342 "dhgroup": "null", 00:23:11.342 "digest": "sha512", 00:23:11.342 "state": "completed" 00:23:11.342 }, 00:23:11.342 "cntlid": 99, 00:23:11.342 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:23:11.342 "listen_address": { 00:23:11.342 "adrfam": "IPv4", 00:23:11.342 "traddr": "10.0.0.3", 00:23:11.342 "trsvcid": "4420", 00:23:11.342 "trtype": "TCP" 00:23:11.342 }, 00:23:11.342 "peer_address": { 00:23:11.342 "adrfam": "IPv4", 00:23:11.342 "traddr": "10.0.0.1", 00:23:11.342 "trsvcid": "42378", 00:23:11.342 "trtype": "TCP" 00:23:11.342 }, 00:23:11.342 "qid": 0, 00:23:11.342 "state": "enabled", 00:23:11.342 "thread": "nvmf_tgt_poll_group_000" 00:23:11.342 } 00:23:11.342 ]' 00:23:11.342 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:11.342 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:11.342 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:11.342 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:11.342 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:11.342 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:11.342 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:11.342 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:11.600 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmE3MzNkYWY0NmMyMWY0OTAzMGNhMzIzMWQxMGJmNzm5Zgcu: --dhchap-ctrl-secret DHHC-1:02:ZWVkMjdkYjYxYTVlODQ3MmQwOTg3NTQ2ZDdhYmI4M2JlY2FiN2U3ODUyZDEzMDRkpA1b6w==: 00:23:11.600 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:01:YmE3MzNkYWY0NmMyMWY0OTAzMGNhMzIzMWQxMGJmNzm5Zgcu: --dhchap-ctrl-secret DHHC-1:02:ZWVkMjdkYjYxYTVlODQ3MmQwOTg3NTQ2ZDdhYmI4M2JlY2FiN2U3ODUyZDEzMDRkpA1b6w==: 00:23:12.166 12:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:12.166 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:12.166 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:23:12.166 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.166 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.166 12:22:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.166 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:12.166 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:12.166 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:12.423 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:23:12.424 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:12.424 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:12.424 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:12.424 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:12.424 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:12.424 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:12.424 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.424 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.424 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.424 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:12.424 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:12.424 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:12.682 00:23:12.682 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:12.682 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:12.682 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:12.939 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.939 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:12.939 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.939 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.939 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.939 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:12.939 { 00:23:12.939 "auth": { 00:23:12.939 "dhgroup": "null", 00:23:12.939 "digest": "sha512", 00:23:12.939 "state": "completed" 00:23:12.939 }, 00:23:12.939 "cntlid": 101, 00:23:12.939 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:23:12.939 "listen_address": { 00:23:12.939 "adrfam": "IPv4", 00:23:12.939 "traddr": "10.0.0.3", 00:23:12.939 "trsvcid": "4420", 00:23:12.939 "trtype": "TCP" 00:23:12.939 }, 00:23:12.939 "peer_address": { 00:23:12.939 "adrfam": "IPv4", 00:23:12.939 "traddr": "10.0.0.1", 00:23:12.939 "trsvcid": "42400", 00:23:12.939 "trtype": "TCP" 00:23:12.939 }, 00:23:12.939 "qid": 0, 00:23:12.939 "state": "enabled", 00:23:12.939 "thread": "nvmf_tgt_poll_group_000" 00:23:12.939 } 00:23:12.939 ]' 00:23:12.939 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:12.939 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:12.939 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:12.939 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:12.939 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:13.196 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:13.196 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:13.196 12:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:13.197 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmYyZjQxZTFmY2VjMzVkMWYxOGFmMzgxYjhjN2I4NGE1NjZlNzkwNWE3NWE2OWI5FkTRXA==: --dhchap-ctrl-secret DHHC-1:01:ZmIwMGJiMjI4OTIxZDM3YzQ2NWVjMjY1NTlkNDQ2ZDYpEJql: 00:23:13.197 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:02:NmYyZjQxZTFmY2VjMzVkMWYxOGFmMzgxYjhjN2I4NGE1NjZlNzkwNWE3NWE2OWI5FkTRXA==: --dhchap-ctrl-secret DHHC-1:01:ZmIwMGJiMjI4OTIxZDM3YzQ2NWVjMjY1NTlkNDQ2ZDYpEJql: 00:23:13.763 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:13.763 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:13.763 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:23:13.763 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.763 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:23:13.763 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.763 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:13.763 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:13.763 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:23:14.021 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:23:14.021 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:14.021 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:14.021 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:23:14.021 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:14.021 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:14.021 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key3 00:23:14.021 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.021 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.021 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.021 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:14.021 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:14.021 12:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:14.279 00:23:14.279 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:14.279 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:14.279 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:14.537 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.537 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:14.537 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:23:14.537 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.537 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.537 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:14.537 { 00:23:14.537 "auth": { 00:23:14.537 "dhgroup": "null", 00:23:14.537 "digest": "sha512", 00:23:14.537 "state": "completed" 00:23:14.537 }, 00:23:14.537 "cntlid": 103, 00:23:14.537 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:23:14.537 "listen_address": { 00:23:14.537 "adrfam": "IPv4", 00:23:14.537 "traddr": "10.0.0.3", 00:23:14.537 "trsvcid": "4420", 00:23:14.537 "trtype": "TCP" 00:23:14.537 }, 00:23:14.537 "peer_address": { 00:23:14.537 "adrfam": "IPv4", 00:23:14.537 "traddr": "10.0.0.1", 00:23:14.537 "trsvcid": "42442", 00:23:14.537 "trtype": "TCP" 00:23:14.537 }, 00:23:14.537 "qid": 0, 00:23:14.537 "state": "enabled", 00:23:14.537 "thread": "nvmf_tgt_poll_group_000" 00:23:14.537 } 00:23:14.537 ]' 00:23:14.537 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:14.537 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:14.537 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:14.537 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:23:14.537 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:14.795 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:14.795 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:14.795 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:14.795 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2UyNjUyY2YwZTcxNzU5NDA2MjBjYjM4OGYxNjE0MmMzNjJmN2FkY2I5MGE0Mzc2NThkZTZkZjUxMjA1OTFiZRcJPQM=: 00:23:14.795 12:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:03:Y2UyNjUyY2YwZTcxNzU5NDA2MjBjYjM4OGYxNjE0MmMzNjJmN2FkY2I5MGE0Mzc2NThkZTZkZjUxMjA1OTFiZRcJPQM=: 00:23:15.361 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:15.361 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:15.361 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:23:15.361 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.361 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.361 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:23:15.361 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:15.361 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:15.361 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:15.361 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:15.618 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:23:15.618 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:15.618 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:15.619 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:15.619 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:15.619 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:15.619 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:15.619 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.619 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.619 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.619 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:15.619 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:15.619 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:15.876 00:23:15.876 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:15.876 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:15.876 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:16.134 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.134 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:16.134 
12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.134 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:16.134 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.134 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:16.134 { 00:23:16.134 "auth": { 00:23:16.134 "dhgroup": "ffdhe2048", 00:23:16.134 "digest": "sha512", 00:23:16.134 "state": "completed" 00:23:16.134 }, 00:23:16.134 "cntlid": 105, 00:23:16.134 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:23:16.134 "listen_address": { 00:23:16.134 "adrfam": "IPv4", 00:23:16.134 "traddr": "10.0.0.3", 00:23:16.134 "trsvcid": "4420", 00:23:16.134 "trtype": "TCP" 00:23:16.134 }, 00:23:16.134 "peer_address": { 00:23:16.134 "adrfam": "IPv4", 00:23:16.134 "traddr": "10.0.0.1", 00:23:16.134 "trsvcid": "42474", 00:23:16.134 "trtype": "TCP" 00:23:16.134 }, 00:23:16.134 "qid": 0, 00:23:16.134 "state": "enabled", 00:23:16.134 "thread": "nvmf_tgt_poll_group_000" 00:23:16.134 } 00:23:16.134 ]' 00:23:16.134 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:16.134 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:16.134 12:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:16.391 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:16.391 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:16.391 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:16.391 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:16.391 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:16.649 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWI1NWY5YjdhYzZkMzBmZWJlZDRiMjEwNTk0N2MwYmJiYzVkYjIwMzdkNjRjMDgzTJxClg==: --dhchap-ctrl-secret DHHC-1:03:MGE5YzFjYzExNDY2MDk5ODZhMzJkZDBiOWE5ZTU5OWU5NGQwYWU4ZmIwY2I1MGI2ZDEyNzE0ZDBkZDFkZDYwNLfIQ3U=: 00:23:16.649 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:00:YWI1NWY5YjdhYzZkMzBmZWJlZDRiMjEwNTk0N2MwYmJiYzVkYjIwMzdkNjRjMDgzTJxClg==: --dhchap-ctrl-secret DHHC-1:03:MGE5YzFjYzExNDY2MDk5ODZhMzJkZDBiOWE5ZTU5OWU5NGQwYWU4ZmIwY2I1MGI2ZDEyNzE0ZDBkZDFkZDYwNLfIQ3U=: 00:23:17.214 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:17.214 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:17.214 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:23:17.214 12:22:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.214 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.214 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.214 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:17.214 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:17.214 12:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:17.214 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:23:17.214 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:17.214 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:17.214 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:17.214 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:17.214 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:17.214 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:17.214 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.214 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.214 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.214 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:17.214 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:17.214 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:17.471 00:23:17.471 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:17.471 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:17.471 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:17.729 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:23:17.729 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:17.729 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.729 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.729 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.729 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:17.729 { 00:23:17.729 "auth": { 00:23:17.729 "dhgroup": "ffdhe2048", 00:23:17.729 "digest": "sha512", 00:23:17.729 "state": "completed" 00:23:17.729 }, 00:23:17.729 "cntlid": 107, 00:23:17.729 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:23:17.729 "listen_address": { 00:23:17.729 "adrfam": "IPv4", 00:23:17.729 "traddr": "10.0.0.3", 00:23:17.729 "trsvcid": "4420", 00:23:17.729 "trtype": "TCP" 00:23:17.729 }, 00:23:17.729 "peer_address": { 00:23:17.729 "adrfam": "IPv4", 00:23:17.729 "traddr": "10.0.0.1", 00:23:17.729 "trsvcid": "42508", 00:23:17.729 "trtype": "TCP" 00:23:17.729 }, 00:23:17.729 "qid": 0, 00:23:17.729 "state": "enabled", 00:23:17.729 "thread": "nvmf_tgt_poll_group_000" 00:23:17.729 } 00:23:17.729 ]' 00:23:17.729 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:17.729 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:17.729 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:17.987 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:17.987 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:17.987 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:17.987 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:17.987 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:17.987 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmE3MzNkYWY0NmMyMWY0OTAzMGNhMzIzMWQxMGJmNzm5Zgcu: --dhchap-ctrl-secret DHHC-1:02:ZWVkMjdkYjYxYTVlODQ3MmQwOTg3NTQ2ZDdhYmI4M2JlY2FiN2U3ODUyZDEzMDRkpA1b6w==: 00:23:17.987 12:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:01:YmE3MzNkYWY0NmMyMWY0OTAzMGNhMzIzMWQxMGJmNzm5Zgcu: --dhchap-ctrl-secret DHHC-1:02:ZWVkMjdkYjYxYTVlODQ3MmQwOTg3NTQ2ZDdhYmI4M2JlY2FiN2U3ODUyZDEzMDRkpA1b6w==: 00:23:18.551 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:18.551 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:18.551 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:23:18.551 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.551 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.809 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.809 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:18.809 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:18.809 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:18.809 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:23:18.809 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:18.809 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:18.809 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:18.809 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:18.809 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:18.809 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:18.809 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.809 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.809 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.809 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:18.809 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:18.809 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:19.067 00:23:19.067 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:19.067 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:19.067 12:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:23:19.325 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.325 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:19.325 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.325 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.325 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.325 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:19.325 { 00:23:19.325 "auth": { 00:23:19.325 "dhgroup": "ffdhe2048", 00:23:19.325 "digest": "sha512", 00:23:19.325 "state": "completed" 00:23:19.325 }, 00:23:19.325 "cntlid": 109, 00:23:19.325 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:23:19.325 "listen_address": { 00:23:19.325 "adrfam": "IPv4", 00:23:19.325 "traddr": "10.0.0.3", 00:23:19.325 "trsvcid": "4420", 00:23:19.325 "trtype": "TCP" 00:23:19.325 }, 00:23:19.325 "peer_address": { 00:23:19.325 "adrfam": "IPv4", 00:23:19.325 "traddr": "10.0.0.1", 00:23:19.325 "trsvcid": "56216", 00:23:19.325 "trtype": "TCP" 00:23:19.325 }, 00:23:19.325 "qid": 0, 00:23:19.325 "state": "enabled", 00:23:19.325 "thread": "nvmf_tgt_poll_group_000" 00:23:19.325 } 00:23:19.325 ]' 00:23:19.325 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:19.325 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:19.325 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:19.583 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:19.583 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:19.583 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:19.583 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:19.583 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:19.841 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmYyZjQxZTFmY2VjMzVkMWYxOGFmMzgxYjhjN2I4NGE1NjZlNzkwNWE3NWE2OWI5FkTRXA==: --dhchap-ctrl-secret DHHC-1:01:ZmIwMGJiMjI4OTIxZDM3YzQ2NWVjMjY1NTlkNDQ2ZDYpEJql: 00:23:19.841 12:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:02:NmYyZjQxZTFmY2VjMzVkMWYxOGFmMzgxYjhjN2I4NGE1NjZlNzkwNWE3NWE2OWI5FkTRXA==: --dhchap-ctrl-secret DHHC-1:01:ZmIwMGJiMjI4OTIxZDM3YzQ2NWVjMjY1NTlkNDQ2ZDYpEJql: 00:23:20.407 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:20.407 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:20.407 12:22:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:23:20.407 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.407 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.407 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.407 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:20.407 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:20.407 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:20.407 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:23:20.407 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:20.407 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:20.407 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:20.407 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:20.407 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:20.407 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key3 00:23:20.407 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.407 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.407 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.407 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:20.407 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:20.407 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:20.971 00:23:20.971 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:20.972 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:20.972 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:23:20.972 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.972 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:20.972 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.972 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.972 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.972 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:20.972 { 00:23:20.972 "auth": { 00:23:20.972 "dhgroup": "ffdhe2048", 00:23:20.972 "digest": "sha512", 00:23:20.972 "state": "completed" 00:23:20.972 }, 00:23:20.972 "cntlid": 111, 00:23:20.972 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:23:20.972 "listen_address": { 00:23:20.972 "adrfam": "IPv4", 00:23:20.972 "traddr": "10.0.0.3", 00:23:20.972 "trsvcid": "4420", 00:23:20.972 "trtype": "TCP" 00:23:20.972 }, 00:23:20.972 "peer_address": { 00:23:20.972 "adrfam": "IPv4", 00:23:20.972 "traddr": "10.0.0.1", 00:23:20.972 "trsvcid": "56236", 00:23:20.972 "trtype": "TCP" 00:23:20.972 }, 00:23:20.972 "qid": 0, 00:23:20.972 "state": "enabled", 00:23:20.972 "thread": "nvmf_tgt_poll_group_000" 00:23:20.972 } 00:23:20.972 ]' 00:23:20.972 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:20.972 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:20.972 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:21.230 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:21.230 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:21.230 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:21.230 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:21.230 12:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:21.487 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2UyNjUyY2YwZTcxNzU5NDA2MjBjYjM4OGYxNjE0MmMzNjJmN2FkY2I5MGE0Mzc2NThkZTZkZjUxMjA1OTFiZRcJPQM=: 00:23:21.487 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:03:Y2UyNjUyY2YwZTcxNzU5NDA2MjBjYjM4OGYxNjE0MmMzNjJmN2FkY2I5MGE0Mzc2NThkZTZkZjUxMjA1OTFiZRcJPQM=: 00:23:22.053 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:22.053 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:22.053 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:23:22.053 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.053 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.053 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.053 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:22.053 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:22.053 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:22.053 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:22.053 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:23:22.053 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:22.053 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:22.053 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:22.053 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:22.053 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:22.053 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:22.053 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.053 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.053 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.053 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:22.053 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:22.053 12:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:22.618 00:23:22.618 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:22.618 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
00:23:22.618 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:22.618 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.618 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:22.618 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.619 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.619 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.619 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:22.619 { 00:23:22.619 "auth": { 00:23:22.619 "dhgroup": "ffdhe3072", 00:23:22.619 "digest": "sha512", 00:23:22.619 "state": "completed" 00:23:22.619 }, 00:23:22.619 "cntlid": 113, 00:23:22.619 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:23:22.619 "listen_address": { 00:23:22.619 "adrfam": "IPv4", 00:23:22.619 "traddr": "10.0.0.3", 00:23:22.619 "trsvcid": "4420", 00:23:22.619 "trtype": "TCP" 00:23:22.619 }, 00:23:22.619 "peer_address": { 00:23:22.619 "adrfam": "IPv4", 00:23:22.619 "traddr": "10.0.0.1", 00:23:22.619 "trsvcid": "56264", 00:23:22.619 "trtype": "TCP" 00:23:22.619 }, 00:23:22.619 "qid": 0, 00:23:22.619 "state": "enabled", 00:23:22.619 "thread": "nvmf_tgt_poll_group_000" 00:23:22.619 } 00:23:22.619 ]' 00:23:22.619 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:22.619 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:22.619 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:22.876 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:22.876 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:22.876 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:22.876 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:22.876 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:22.876 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWI1NWY5YjdhYzZkMzBmZWJlZDRiMjEwNTk0N2MwYmJiYzVkYjIwMzdkNjRjMDgzTJxClg==: --dhchap-ctrl-secret DHHC-1:03:MGE5YzFjYzExNDY2MDk5ODZhMzJkZDBiOWE5ZTU5OWU5NGQwYWU4ZmIwY2I1MGI2ZDEyNzE0ZDBkZDFkZDYwNLfIQ3U=: 00:23:22.876 12:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:00:YWI1NWY5YjdhYzZkMzBmZWJlZDRiMjEwNTk0N2MwYmJiYzVkYjIwMzdkNjRjMDgzTJxClg==: --dhchap-ctrl-secret 
DHHC-1:03:MGE5YzFjYzExNDY2MDk5ODZhMzJkZDBiOWE5ZTU5OWU5NGQwYWU4ZmIwY2I1MGI2ZDEyNzE0ZDBkZDFkZDYwNLfIQ3U=: 00:23:23.454 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:23.454 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:23.454 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:23:23.454 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.454 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:23.454 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.454 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:23.454 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:23.454 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:23.717 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:23:23.717 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:23.717 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:23.717 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:23.717 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:23.717 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:23.717 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:23.717 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.717 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:23.717 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.717 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:23.717 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:23.717 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:23.975 00:23:23.975 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:23.975 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:23.975 12:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:24.232 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.232 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:24.232 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.232 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.232 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.232 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:24.232 { 00:23:24.232 "auth": { 00:23:24.232 "dhgroup": "ffdhe3072", 00:23:24.232 "digest": "sha512", 00:23:24.232 "state": "completed" 00:23:24.232 }, 00:23:24.232 "cntlid": 115, 00:23:24.232 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:23:24.232 "listen_address": { 00:23:24.233 "adrfam": "IPv4", 00:23:24.233 "traddr": "10.0.0.3", 00:23:24.233 "trsvcid": "4420", 00:23:24.233 "trtype": "TCP" 00:23:24.233 }, 00:23:24.233 "peer_address": { 00:23:24.233 "adrfam": "IPv4", 00:23:24.233 "traddr": "10.0.0.1", 00:23:24.233 "trsvcid": "56308", 00:23:24.233 "trtype": "TCP" 00:23:24.233 }, 00:23:24.233 "qid": 0, 00:23:24.233 "state": "enabled", 00:23:24.233 "thread": "nvmf_tgt_poll_group_000" 00:23:24.233 } 00:23:24.233 ]' 00:23:24.233 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:24.233 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:24.233 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:24.489 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:24.489 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:24.489 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:24.489 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:24.489 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:24.747 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmE3MzNkYWY0NmMyMWY0OTAzMGNhMzIzMWQxMGJmNzm5Zgcu: --dhchap-ctrl-secret DHHC-1:02:ZWVkMjdkYjYxYTVlODQ3MmQwOTg3NTQ2ZDdhYmI4M2JlY2FiN2U3ODUyZDEzMDRkpA1b6w==: 00:23:24.747 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 
5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:01:YmE3MzNkYWY0NmMyMWY0OTAzMGNhMzIzMWQxMGJmNzm5Zgcu: --dhchap-ctrl-secret DHHC-1:02:ZWVkMjdkYjYxYTVlODQ3MmQwOTg3NTQ2ZDdhYmI4M2JlY2FiN2U3ODUyZDEzMDRkpA1b6w==: 00:23:25.313 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:25.313 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:25.313 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:23:25.313 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.313 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.313 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.313 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:25.313 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:25.313 12:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:25.313 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:23:25.313 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:25.313 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:25.313 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:25.313 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:25.313 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:25.313 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:25.313 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.313 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.313 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.313 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:25.313 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:25.313 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:25.571 00:23:25.829 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:25.829 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:25.829 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:25.829 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.829 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:25.829 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.829 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.829 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.829 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:25.829 { 00:23:25.829 "auth": { 00:23:25.829 "dhgroup": "ffdhe3072", 00:23:25.829 "digest": "sha512", 00:23:25.829 "state": "completed" 00:23:25.829 }, 00:23:25.829 "cntlid": 117, 00:23:25.829 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:23:25.829 "listen_address": { 00:23:25.829 "adrfam": "IPv4", 00:23:25.829 "traddr": "10.0.0.3", 00:23:25.829 "trsvcid": "4420", 00:23:25.829 "trtype": "TCP" 00:23:25.829 }, 00:23:25.829 "peer_address": { 00:23:25.829 "adrfam": "IPv4", 00:23:25.829 "traddr": "10.0.0.1", 00:23:25.829 "trsvcid": "56348", 00:23:25.829 "trtype": "TCP" 00:23:25.829 }, 00:23:25.829 "qid": 0, 00:23:25.829 "state": "enabled", 00:23:25.829 "thread": "nvmf_tgt_poll_group_000" 00:23:25.829 } 00:23:25.829 ]' 00:23:25.829 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:26.087 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:26.087 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:26.087 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:26.087 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:26.087 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:26.087 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:26.087 12:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:26.345 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmYyZjQxZTFmY2VjMzVkMWYxOGFmMzgxYjhjN2I4NGE1NjZlNzkwNWE3NWE2OWI5FkTRXA==: --dhchap-ctrl-secret DHHC-1:01:ZmIwMGJiMjI4OTIxZDM3YzQ2NWVjMjY1NTlkNDQ2ZDYpEJql: 00:23:26.345 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:02:NmYyZjQxZTFmY2VjMzVkMWYxOGFmMzgxYjhjN2I4NGE1NjZlNzkwNWE3NWE2OWI5FkTRXA==: --dhchap-ctrl-secret DHHC-1:01:ZmIwMGJiMjI4OTIxZDM3YzQ2NWVjMjY1NTlkNDQ2ZDYpEJql: 00:23:26.912 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:26.912 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:26.912 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:23:26.912 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.912 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:26.912 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.912 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:26.912 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:26.912 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:27.190 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:23:27.190 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:27.190 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:27.190 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:27.190 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:27.190 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:27.190 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key3 00:23:27.190 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.190 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.190 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.190 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:27.190 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:27.190 12:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:27.448 00:23:27.448 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:27.448 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:27.448 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:27.705 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.706 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:27.706 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.706 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.706 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.706 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:27.706 { 00:23:27.706 "auth": { 00:23:27.706 "dhgroup": "ffdhe3072", 00:23:27.706 "digest": "sha512", 00:23:27.706 "state": "completed" 00:23:27.706 }, 00:23:27.706 "cntlid": 119, 00:23:27.706 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:23:27.706 "listen_address": { 00:23:27.706 "adrfam": "IPv4", 00:23:27.706 "traddr": "10.0.0.3", 00:23:27.706 "trsvcid": "4420", 00:23:27.706 "trtype": "TCP" 00:23:27.706 }, 00:23:27.706 "peer_address": { 00:23:27.706 "adrfam": "IPv4", 00:23:27.706 "traddr": "10.0.0.1", 00:23:27.706 "trsvcid": "56388", 00:23:27.706 "trtype": "TCP" 00:23:27.706 }, 00:23:27.706 "qid": 0, 00:23:27.706 "state": "enabled", 00:23:27.706 "thread": "nvmf_tgt_poll_group_000" 00:23:27.706 } 00:23:27.706 ]' 00:23:27.706 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:27.706 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:27.706 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:27.706 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:27.706 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:27.706 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:27.706 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:27.706 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:27.963 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2UyNjUyY2YwZTcxNzU5NDA2MjBjYjM4OGYxNjE0MmMzNjJmN2FkY2I5MGE0Mzc2NThkZTZkZjUxMjA1OTFiZRcJPQM=: 00:23:27.963 12:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:03:Y2UyNjUyY2YwZTcxNzU5NDA2MjBjYjM4OGYxNjE0MmMzNjJmN2FkY2I5MGE0Mzc2NThkZTZkZjUxMjA1OTFiZRcJPQM=:
00:23:28.529 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:23:28.529 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:23:28.529 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d
00:23:28.529 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:28.529 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:28.529 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:28.529 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:23:28.529 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:23:28.529 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:23:28.529 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:23:28.786 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0
00:23:28.786 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:23:28.786 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:23:28.786 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:23:28.786 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:23:28.786 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:23:28.786 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:23:28.786 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:28.786 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:28.786 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:28.786 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:23:28.786 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
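The cycle above is the whole DH-HMAC-CHAP provisioning handshake in miniature: the target-side nvmf_subsystem_add_host call registers the host NQN and names the key material it must present (adding --dhchap-ctrlr-key requests bidirectional authentication; the key3 iterations in this run omit it, so those are unidirectional), while the host-side bdev_nvme_attach_controller presents the matching keys. A minimal standalone sketch of the pair, assuming the secrets were registered beforehand as keyring entries named key0/ckey0 on both SPDK instances (as the earlier part of this run would have done, e.g. via keyring_file_add_key):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Target side (default RPC socket): allow the host and demand DH-HMAC-CHAP.
  $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # Host side (the first -s picks the initiator instance's RPC socket; the
  # second -s is the NVMe/TCP service port): attach with the matching keys.
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.3 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

00:23:28.786 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target --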
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:29.044 00:23:29.044 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:29.044 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:29.044 12:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:29.302 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.302 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:29.302 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.302 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:29.302 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.302 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:29.302 { 00:23:29.302 "auth": { 00:23:29.302 "dhgroup": "ffdhe4096", 00:23:29.302 "digest": "sha512", 00:23:29.302 "state": "completed" 00:23:29.302 }, 00:23:29.302 "cntlid": 121, 00:23:29.302 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:23:29.302 "listen_address": { 00:23:29.302 "adrfam": "IPv4", 00:23:29.302 "traddr": "10.0.0.3", 00:23:29.302 "trsvcid": "4420", 00:23:29.302 "trtype": "TCP" 00:23:29.302 }, 00:23:29.302 "peer_address": { 00:23:29.302 "adrfam": "IPv4", 00:23:29.302 "traddr": "10.0.0.1", 00:23:29.302 "trsvcid": "41182", 00:23:29.302 "trtype": "TCP" 00:23:29.302 }, 00:23:29.302 "qid": 0, 00:23:29.302 "state": "enabled", 00:23:29.302 "thread": "nvmf_tgt_poll_group_000" 00:23:29.302 } 00:23:29.302 ]' 00:23:29.302 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:29.302 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:29.302 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:29.302 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:29.302 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:29.302 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:29.302 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:29.302 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:29.560 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWI1NWY5YjdhYzZkMzBmZWJlZDRiMjEwNTk0N2MwYmJiYzVkYjIwMzdkNjRjMDgzTJxClg==: --dhchap-ctrl-secret 
DHHC-1:03:MGE5YzFjYzExNDY2MDk5ODZhMzJkZDBiOWE5ZTU5OWU5NGQwYWU4ZmIwY2I1MGI2ZDEyNzE0ZDBkZDFkZDYwNLfIQ3U=: 00:23:29.560 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:00:YWI1NWY5YjdhYzZkMzBmZWJlZDRiMjEwNTk0N2MwYmJiYzVkYjIwMzdkNjRjMDgzTJxClg==: --dhchap-ctrl-secret DHHC-1:03:MGE5YzFjYzExNDY2MDk5ODZhMzJkZDBiOWE5ZTU5OWU5NGQwYWU4ZmIwY2I1MGI2ZDEyNzE0ZDBkZDFkZDYwNLfIQ3U=: 00:23:30.124 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:30.124 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:30.124 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:23:30.124 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.124 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:30.124 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.124 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:30.124 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:30.124 12:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:30.382 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:23:30.382 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:30.382 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:30.382 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:30.382 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:30.382 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:30.382 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:30.382 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.382 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:30.382 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.382 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:30.382 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:23:30.382 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:23:30.639
00:23:30.897 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:23:30.897 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:23:30.897 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:23:31.155 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:31.155 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:23:31.155 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:31.155 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:31.155 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:31.155 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:23:31.155 {
00:23:31.155 "auth": {
00:23:31.155 "dhgroup": "ffdhe4096",
00:23:31.155 "digest": "sha512",
00:23:31.155 "state": "completed"
00:23:31.155 },
00:23:31.155 "cntlid": 123,
00:23:31.155 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d",
00:23:31.155 "listen_address": {
00:23:31.155 "adrfam": "IPv4",
00:23:31.155 "traddr": "10.0.0.3",
00:23:31.155 "trsvcid": "4420",
00:23:31.155 "trtype": "TCP"
00:23:31.155 },
00:23:31.155 "peer_address": {
00:23:31.155 "adrfam": "IPv4",
00:23:31.155 "traddr": "10.0.0.1",
00:23:31.155 "trsvcid": "41210",
00:23:31.155 "trtype": "TCP"
00:23:31.155 },
00:23:31.155 "qid": 0,
00:23:31.155 "state": "enabled",
00:23:31.155 "thread": "nvmf_tgt_poll_group_000"
00:23:31.155 }
00:23:31.155 ]'
00:23:31.155 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:23:31.155 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:23:31.155 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:23:31.155 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:23:31.155 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:23:31.155 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:23:31.155 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:23:31.155 12:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
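The qpairs JSON above doubles as documentation of what a successfully authenticated queue pair looks like from the target's side: the auth object records the negotiated digest, DH group, and final state, and the three [[ ... ]] checks that follow are plain jq lookups against it. A hand-run equivalent of the same validation (RPC path and subsystem NQN taken from this trace):

  # Query the target (default socket) and assert the negotiated parameters.
  qpairs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py \
      nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == sha512 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
  [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]

00:23:31.413 12:22:27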
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmE3MzNkYWY0NmMyMWY0OTAzMGNhMzIzMWQxMGJmNzm5Zgcu: --dhchap-ctrl-secret DHHC-1:02:ZWVkMjdkYjYxYTVlODQ3MmQwOTg3NTQ2ZDdhYmI4M2JlY2FiN2U3ODUyZDEzMDRkpA1b6w==:
00:23:31.413 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:01:YmE3MzNkYWY0NmMyMWY0OTAzMGNhMzIzMWQxMGJmNzm5Zgcu: --dhchap-ctrl-secret DHHC-1:02:ZWVkMjdkYjYxYTVlODQ3MmQwOTg3NTQ2ZDdhYmI4M2JlY2FiN2U3ODUyZDEzMDRkpA1b6w==:
00:23:32.007 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:23:32.007 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:23:32.007 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d
00:23:32.007 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:32.007 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:32.007 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:32.007 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:23:32.007 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:23:32.007 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:23:32.265 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2
00:23:32.265 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:23:32.265 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:23:32.265 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:23:32.265 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:23:32.265 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:23:32.265 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:23:32.265 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:32.265 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:32.265 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:32.265 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
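The nvme_connect at the top of this cycle is the cross-check against the kernel initiator: unlike the SPDK host, nvme-cli takes the raw DHHC-1 strings on the command line rather than keyring names. In isolation the command looks like this (arguments as in the trace; the TCP port defaults to 4420, -i 1 requests a single I/O queue, and -l 0 sets ctrl-loss-tmo):

  # Kernel host connect with bidirectional DH-HMAC-CHAP; --dhchap-secret is
  # the host's key, --dhchap-ctrl-secret the key the target must prove.
  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d \
      --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 \
      --dhchap-secret 'DHHC-1:01:YmE3MzNkYWY0NmMyMWY0OTAzMGNhMzIzMWQxMGJmNzm5Zgcu:' \
      --dhchap-ctrl-secret 'DHHC-1:02:ZWVkMjdkYjYxYTVlODQ3MmQwOTg3NTQ2ZDdhYmI4M2JlY2FiN2U3ODUyZDEzMDRkpA1b6w==:'

The middle field of a DHHC-1 string identifies the HMAC used to transform the base64 secret that follows (00 means untransformed; 01, 02, and 03 correspond to SHA-256, SHA-384, and SHA-512).

00:23:32.265 12:22:27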
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:32.265 12:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:32.521 00:23:32.521 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:32.522 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:32.522 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:32.779 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.779 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:32.779 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.779 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:32.779 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.779 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:32.779 { 00:23:32.779 "auth": { 00:23:32.779 "dhgroup": "ffdhe4096", 00:23:32.779 "digest": "sha512", 00:23:32.779 "state": "completed" 00:23:32.779 }, 00:23:32.779 "cntlid": 125, 00:23:32.779 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:23:32.779 "listen_address": { 00:23:32.779 "adrfam": "IPv4", 00:23:32.779 "traddr": "10.0.0.3", 00:23:32.779 "trsvcid": "4420", 00:23:32.779 "trtype": "TCP" 00:23:32.779 }, 00:23:32.779 "peer_address": { 00:23:32.779 "adrfam": "IPv4", 00:23:32.779 "traddr": "10.0.0.1", 00:23:32.779 "trsvcid": "41230", 00:23:32.779 "trtype": "TCP" 00:23:32.779 }, 00:23:32.779 "qid": 0, 00:23:32.779 "state": "enabled", 00:23:32.779 "thread": "nvmf_tgt_poll_group_000" 00:23:32.779 } 00:23:32.779 ]' 00:23:32.779 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:32.779 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:32.779 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:32.779 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:32.779 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:32.779 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:32.779 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:32.779 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:33.037 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmYyZjQxZTFmY2VjMzVkMWYxOGFmMzgxYjhjN2I4NGE1NjZlNzkwNWE3NWE2OWI5FkTRXA==: --dhchap-ctrl-secret DHHC-1:01:ZmIwMGJiMjI4OTIxZDM3YzQ2NWVjMjY1NTlkNDQ2ZDYpEJql: 00:23:33.037 12:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:02:NmYyZjQxZTFmY2VjMzVkMWYxOGFmMzgxYjhjN2I4NGE1NjZlNzkwNWE3NWE2OWI5FkTRXA==: --dhchap-ctrl-secret DHHC-1:01:ZmIwMGJiMjI4OTIxZDM3YzQ2NWVjMjY1NTlkNDQ2ZDYpEJql: 00:23:33.603 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:33.603 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:33.603 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:23:33.603 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.603 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.603 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.603 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:33.603 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:33.603 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:33.860 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:23:33.860 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:33.860 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:33.860 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:33.860 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:33.860 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:33.860 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key3 00:23:33.860 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.860 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.860 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.860 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:23:33.860 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:33.860 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:34.117 00:23:34.117 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:34.117 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:34.117 12:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:34.374 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.375 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:34.375 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:34.375 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:34.375 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:34.375 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:34.375 { 00:23:34.375 "auth": { 00:23:34.375 "dhgroup": "ffdhe4096", 00:23:34.375 "digest": "sha512", 00:23:34.375 "state": "completed" 00:23:34.375 }, 00:23:34.375 "cntlid": 127, 00:23:34.375 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:23:34.375 "listen_address": { 00:23:34.375 "adrfam": "IPv4", 00:23:34.375 "traddr": "10.0.0.3", 00:23:34.375 "trsvcid": "4420", 00:23:34.375 "trtype": "TCP" 00:23:34.375 }, 00:23:34.375 "peer_address": { 00:23:34.375 "adrfam": "IPv4", 00:23:34.375 "traddr": "10.0.0.1", 00:23:34.375 "trsvcid": "41252", 00:23:34.375 "trtype": "TCP" 00:23:34.375 }, 00:23:34.375 "qid": 0, 00:23:34.375 "state": "enabled", 00:23:34.375 "thread": "nvmf_tgt_poll_group_000" 00:23:34.375 } 00:23:34.375 ]' 00:23:34.375 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:34.375 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:34.375 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:34.632 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:34.632 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:34.632 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:34.632 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:34.632 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:23:34.632 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2UyNjUyY2YwZTcxNzU5NDA2MjBjYjM4OGYxNjE0MmMzNjJmN2FkY2I5MGE0Mzc2NThkZTZkZjUxMjA1OTFiZRcJPQM=:
00:23:34.632 12:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:03:Y2UyNjUyY2YwZTcxNzU5NDA2MjBjYjM4OGYxNjE0MmMzNjJmN2FkY2I5MGE0Mzc2NThkZTZkZjUxMjA1OTFiZRcJPQM=:
00:23:35.197 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:23:35.456 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:23:35.456 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d
00:23:35.456 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:35.456 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:35.456 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:35.456 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:23:35.456 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:23:35.456 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:23:35.456 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:23:35.456 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0
00:23:35.456 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:23:35.456 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:23:35.456 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:23:35.456 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:23:35.456 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:23:35.456 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:23:35.456 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:35.456 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:35.456 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
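Worth pausing on the bdev_nvme_set_options call that opens every one of these cycles: the per-attach RPCs name which keys to use, but the digest and DH group that DH-HMAC-CHAP will negotiate are global initiator options. Restricting them to a single value is how the test forces each sha512/ffdhe* combination to actually go over the wire instead of letting negotiation settle on a favorite. A sketch of pinning the combination exercised here (socket path as in this run):

  # Host-side knobs: only the sha512 digest and the ffdhe6144 DH group are
  # offered, so the authentication below either uses them or fails.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_set_options \
      --dhchap-digests sha512 \
      --dhchap-dhgroups ffdhe6144

00:23:35.456 12:22:31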
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:35.456 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:35.456 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:36.020 00:23:36.021 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:36.021 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:36.021 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:36.278 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.278 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:36.278 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.278 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:36.278 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.278 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:36.278 { 00:23:36.278 "auth": { 00:23:36.278 "dhgroup": "ffdhe6144", 00:23:36.278 "digest": "sha512", 00:23:36.278 "state": "completed" 00:23:36.278 }, 00:23:36.278 "cntlid": 129, 00:23:36.278 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:23:36.278 "listen_address": { 00:23:36.278 "adrfam": "IPv4", 00:23:36.278 "traddr": "10.0.0.3", 00:23:36.278 "trsvcid": "4420", 00:23:36.278 "trtype": "TCP" 00:23:36.278 }, 00:23:36.278 "peer_address": { 00:23:36.278 "adrfam": "IPv4", 00:23:36.278 "traddr": "10.0.0.1", 00:23:36.278 "trsvcid": "41280", 00:23:36.278 "trtype": "TCP" 00:23:36.278 }, 00:23:36.278 "qid": 0, 00:23:36.278 "state": "enabled", 00:23:36.278 "thread": "nvmf_tgt_poll_group_000" 00:23:36.278 } 00:23:36.278 ]' 00:23:36.278 12:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:36.278 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:36.278 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:36.278 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:36.278 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:36.278 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:36.278 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:36.278 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:36.535 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWI1NWY5YjdhYzZkMzBmZWJlZDRiMjEwNTk0N2MwYmJiYzVkYjIwMzdkNjRjMDgzTJxClg==: --dhchap-ctrl-secret DHHC-1:03:MGE5YzFjYzExNDY2MDk5ODZhMzJkZDBiOWE5ZTU5OWU5NGQwYWU4ZmIwY2I1MGI2ZDEyNzE0ZDBkZDFkZDYwNLfIQ3U=: 00:23:36.535 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:00:YWI1NWY5YjdhYzZkMzBmZWJlZDRiMjEwNTk0N2MwYmJiYzVkYjIwMzdkNjRjMDgzTJxClg==: --dhchap-ctrl-secret DHHC-1:03:MGE5YzFjYzExNDY2MDk5ODZhMzJkZDBiOWE5ZTU5OWU5NGQwYWU4ZmIwY2I1MGI2ZDEyNzE0ZDBkZDFkZDYwNLfIQ3U=: 00:23:37.101 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:37.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:37.101 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:23:37.101 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.101 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:37.101 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.101 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:37.101 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:37.101 12:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:37.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:23:37.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:37.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:37.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:37.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:37.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:37.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:37.359 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.359 12:22:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:37.360 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.360 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:37.360 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:37.360 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:37.925 00:23:37.925 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:37.925 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:37.925 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:37.925 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.925 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:37.925 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.925 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:37.925 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.925 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:37.925 { 00:23:37.925 "auth": { 00:23:37.925 "dhgroup": "ffdhe6144", 00:23:37.925 "digest": "sha512", 00:23:37.925 "state": "completed" 00:23:37.925 }, 00:23:37.925 "cntlid": 131, 00:23:37.925 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:23:37.925 "listen_address": { 00:23:37.925 "adrfam": "IPv4", 00:23:37.925 "traddr": "10.0.0.3", 00:23:37.925 "trsvcid": "4420", 00:23:37.925 "trtype": "TCP" 00:23:37.925 }, 00:23:37.925 "peer_address": { 00:23:37.925 "adrfam": "IPv4", 00:23:37.925 "traddr": "10.0.0.1", 00:23:37.925 "trsvcid": "41310", 00:23:37.925 "trtype": "TCP" 00:23:37.925 }, 00:23:37.925 "qid": 0, 00:23:37.925 "state": "enabled", 00:23:37.925 "thread": "nvmf_tgt_poll_group_000" 00:23:37.925 } 00:23:37.925 ]' 00:23:37.925 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:38.182 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:38.182 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:38.182 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:38.182 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state'
00:23:38.182 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:23:38.182 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:23:38.182 12:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:23:38.440 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmE3MzNkYWY0NmMyMWY0OTAzMGNhMzIzMWQxMGJmNzm5Zgcu: --dhchap-ctrl-secret DHHC-1:02:ZWVkMjdkYjYxYTVlODQ3MmQwOTg3NTQ2ZDdhYmI4M2JlY2FiN2U3ODUyZDEzMDRkpA1b6w==:
00:23:38.440 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:01:YmE3MzNkYWY0NmMyMWY0OTAzMGNhMzIzMWQxMGJmNzm5Zgcu: --dhchap-ctrl-secret DHHC-1:02:ZWVkMjdkYjYxYTVlODQ3MmQwOTg3NTQ2ZDdhYmI4M2JlY2FiN2U3ODUyZDEzMDRkpA1b6w==:
00:23:39.005 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:23:39.005 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:23:39.005 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d
00:23:39.005 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:39.005 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:39.005 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:39.005 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:23:39.005 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:23:39.005 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:23:39.264 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2
00:23:39.264 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:23:39.264 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:23:39.264 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144
00:23:39.264 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:23:39.264 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:23:39.264 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key2 --dhchap-ctrlr-key ckey2
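As a closing aside on the key material itself: every secret in this run is a disposable DHHC-1 string, and equivalents can be minted with nvme-cli rather than built by hand. A hypothetical invocation producing a key of the same shape as the :01: host secret above (assuming a reasonably recent nvme-cli; the output is random on each run):

  # 32-byte secret, transformed with SHA-256 (--hmac 1) and bound to the
  # host NQN; prints a DHHC-1:01:...: string suitable for --dhchap-secret.
  nvme gen-dhchap-key --key-length 32 --hmac 1 \
      --nqn nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d

00:23:39.264 12:22:34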
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.264 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:39.264 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.264 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:39.264 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:39.264 12:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:39.521 00:23:39.521 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:39.521 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:39.521 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:39.779 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:39.779 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:39.779 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.779 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:39.779 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.779 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:39.779 { 00:23:39.779 "auth": { 00:23:39.780 "dhgroup": "ffdhe6144", 00:23:39.780 "digest": "sha512", 00:23:39.780 "state": "completed" 00:23:39.780 }, 00:23:39.780 "cntlid": 133, 00:23:39.780 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:23:39.780 "listen_address": { 00:23:39.780 "adrfam": "IPv4", 00:23:39.780 "traddr": "10.0.0.3", 00:23:39.780 "trsvcid": "4420", 00:23:39.780 "trtype": "TCP" 00:23:39.780 }, 00:23:39.780 "peer_address": { 00:23:39.780 "adrfam": "IPv4", 00:23:39.780 "traddr": "10.0.0.1", 00:23:39.780 "trsvcid": "38948", 00:23:39.780 "trtype": "TCP" 00:23:39.780 }, 00:23:39.780 "qid": 0, 00:23:39.780 "state": "enabled", 00:23:39.780 "thread": "nvmf_tgt_poll_group_000" 00:23:39.780 } 00:23:39.780 ]' 00:23:39.780 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:39.780 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:39.780 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:39.780 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:23:39.780 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:39.780 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:39.780 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:39.780 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:40.037 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmYyZjQxZTFmY2VjMzVkMWYxOGFmMzgxYjhjN2I4NGE1NjZlNzkwNWE3NWE2OWI5FkTRXA==: --dhchap-ctrl-secret DHHC-1:01:ZmIwMGJiMjI4OTIxZDM3YzQ2NWVjMjY1NTlkNDQ2ZDYpEJql: 00:23:40.037 12:22:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:02:NmYyZjQxZTFmY2VjMzVkMWYxOGFmMzgxYjhjN2I4NGE1NjZlNzkwNWE3NWE2OWI5FkTRXA==: --dhchap-ctrl-secret DHHC-1:01:ZmIwMGJiMjI4OTIxZDM3YzQ2NWVjMjY1NTlkNDQ2ZDYpEJql: 00:23:40.664 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:40.664 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:40.664 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:23:40.664 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.664 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:40.936 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.936 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:40.936 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:40.936 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:40.936 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:23:40.936 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:40.936 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:40.936 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:40.936 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:40.936 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:40.936 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key3 00:23:40.936 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.936 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:40.936 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.936 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:40.936 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:40.936 12:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:41.500 00:23:41.500 12:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:41.500 12:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:41.500 12:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:41.500 12:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:41.500 12:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:41.500 12:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.500 12:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:41.500 12:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.500 12:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:41.500 { 00:23:41.500 "auth": { 00:23:41.500 "dhgroup": "ffdhe6144", 00:23:41.500 "digest": "sha512", 00:23:41.500 "state": "completed" 00:23:41.500 }, 00:23:41.500 "cntlid": 135, 00:23:41.500 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:23:41.500 "listen_address": { 00:23:41.500 "adrfam": "IPv4", 00:23:41.500 "traddr": "10.0.0.3", 00:23:41.500 "trsvcid": "4420", 00:23:41.500 "trtype": "TCP" 00:23:41.500 }, 00:23:41.500 "peer_address": { 00:23:41.500 "adrfam": "IPv4", 00:23:41.500 "traddr": "10.0.0.1", 00:23:41.500 "trsvcid": "38982", 00:23:41.500 "trtype": "TCP" 00:23:41.500 }, 00:23:41.500 "qid": 0, 00:23:41.500 "state": "enabled", 00:23:41.500 "thread": "nvmf_tgt_poll_group_000" 00:23:41.500 } 00:23:41.500 ]' 00:23:41.500 12:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:41.758 12:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:41.758 12:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:41.758 12:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:41.758 12:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:41.758 12:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:41.758 12:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:41.758 12:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:42.015 12:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2UyNjUyY2YwZTcxNzU5NDA2MjBjYjM4OGYxNjE0MmMzNjJmN2FkY2I5MGE0Mzc2NThkZTZkZjUxMjA1OTFiZRcJPQM=: 00:23:42.016 12:22:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:03:Y2UyNjUyY2YwZTcxNzU5NDA2MjBjYjM4OGYxNjE0MmMzNjJmN2FkY2I5MGE0Mzc2NThkZTZkZjUxMjA1OTFiZRcJPQM=: 00:23:42.582 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:42.582 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:42.582 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:23:42.582 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.582 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:42.582 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.582 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:42.582 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:42.582 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:42.582 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:42.840 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:23:42.840 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:42.840 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:42.840 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:42.840 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:42.840 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:42.840 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:42.840 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.840 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:42.840 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.840 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:42.840 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:42.840 12:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:43.407 00:23:43.407 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:43.407 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:43.407 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:43.407 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.407 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:43.407 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.407 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:43.407 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.407 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:43.407 { 00:23:43.407 "auth": { 00:23:43.407 "dhgroup": "ffdhe8192", 00:23:43.407 "digest": "sha512", 00:23:43.407 "state": "completed" 00:23:43.407 }, 00:23:43.407 "cntlid": 137, 00:23:43.407 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:23:43.407 "listen_address": { 00:23:43.407 "adrfam": "IPv4", 00:23:43.407 "traddr": "10.0.0.3", 00:23:43.407 "trsvcid": "4420", 00:23:43.407 "trtype": "TCP" 00:23:43.407 }, 00:23:43.407 "peer_address": { 00:23:43.407 "adrfam": "IPv4", 00:23:43.407 "traddr": "10.0.0.1", 00:23:43.407 "trsvcid": "39024", 00:23:43.407 "trtype": "TCP" 00:23:43.407 }, 00:23:43.407 "qid": 0, 00:23:43.407 "state": "enabled", 00:23:43.407 "thread": "nvmf_tgt_poll_group_000" 00:23:43.407 } 00:23:43.407 ]' 00:23:43.407 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:43.407 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:43.407 12:22:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:43.665 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:43.665 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:43.665 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:43.665 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:43.665 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:43.665 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWI1NWY5YjdhYzZkMzBmZWJlZDRiMjEwNTk0N2MwYmJiYzVkYjIwMzdkNjRjMDgzTJxClg==: --dhchap-ctrl-secret DHHC-1:03:MGE5YzFjYzExNDY2MDk5ODZhMzJkZDBiOWE5ZTU5OWU5NGQwYWU4ZmIwY2I1MGI2ZDEyNzE0ZDBkZDFkZDYwNLfIQ3U=: 00:23:43.665 12:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:00:YWI1NWY5YjdhYzZkMzBmZWJlZDRiMjEwNTk0N2MwYmJiYzVkYjIwMzdkNjRjMDgzTJxClg==: --dhchap-ctrl-secret DHHC-1:03:MGE5YzFjYzExNDY2MDk5ODZhMzJkZDBiOWE5ZTU5OWU5NGQwYWU4ZmIwY2I1MGI2ZDEyNzE0ZDBkZDFkZDYwNLfIQ3U=: 00:23:44.230 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:44.488 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:44.488 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:23:44.488 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.488 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:44.488 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.488 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:44.488 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:44.488 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:44.488 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:23:44.488 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:44.488 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:44.488 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:44.488 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:44.488 12:22:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:44.488 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:44.488 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.488 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:44.488 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.488 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:44.488 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:44.488 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:45.054 00:23:45.054 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:45.054 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:45.054 12:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:45.312 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:45.312 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:45.312 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.312 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:45.312 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.312 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:45.312 { 00:23:45.312 "auth": { 00:23:45.312 "dhgroup": "ffdhe8192", 00:23:45.312 "digest": "sha512", 00:23:45.312 "state": "completed" 00:23:45.312 }, 00:23:45.312 "cntlid": 139, 00:23:45.312 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:23:45.312 "listen_address": { 00:23:45.312 "adrfam": "IPv4", 00:23:45.312 "traddr": "10.0.0.3", 00:23:45.312 "trsvcid": "4420", 00:23:45.312 "trtype": "TCP" 00:23:45.312 }, 00:23:45.312 "peer_address": { 00:23:45.312 "adrfam": "IPv4", 00:23:45.312 "traddr": "10.0.0.1", 00:23:45.312 "trsvcid": "39052", 00:23:45.312 "trtype": "TCP" 00:23:45.312 }, 00:23:45.312 "qid": 0, 00:23:45.312 "state": "enabled", 00:23:45.312 "thread": "nvmf_tgt_poll_group_000" 00:23:45.312 } 00:23:45.312 ]' 00:23:45.312 12:22:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:45.313 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:45.313 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:45.313 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:45.313 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:45.313 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:45.313 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:45.313 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:45.570 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YmE3MzNkYWY0NmMyMWY0OTAzMGNhMzIzMWQxMGJmNzm5Zgcu: --dhchap-ctrl-secret DHHC-1:02:ZWVkMjdkYjYxYTVlODQ3MmQwOTg3NTQ2ZDdhYmI4M2JlY2FiN2U3ODUyZDEzMDRkpA1b6w==: 00:23:45.571 12:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:01:YmE3MzNkYWY0NmMyMWY0OTAzMGNhMzIzMWQxMGJmNzm5Zgcu: --dhchap-ctrl-secret DHHC-1:02:ZWVkMjdkYjYxYTVlODQ3MmQwOTg3NTQ2ZDdhYmI4M2JlY2FiN2U3ODUyZDEzMDRkpA1b6w==: 00:23:46.507 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:46.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:46.507 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:23:46.507 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.507 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:46.507 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.507 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:46.507 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:46.507 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:46.507 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:23:46.507 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:46.507 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:46.507 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:23:46.507 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:46.507 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:46.507 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:46.507 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.507 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:46.507 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.507 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:46.507 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:46.507 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:47.085 00:23:47.085 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:47.085 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:47.085 12:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:47.398 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:47.398 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:47.398 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.398 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:47.398 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.398 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:47.398 { 00:23:47.398 "auth": { 00:23:47.398 "dhgroup": "ffdhe8192", 00:23:47.398 "digest": "sha512", 00:23:47.398 "state": "completed" 00:23:47.398 }, 00:23:47.398 "cntlid": 141, 00:23:47.399 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:23:47.399 "listen_address": { 00:23:47.399 "adrfam": "IPv4", 00:23:47.399 "traddr": "10.0.0.3", 00:23:47.399 "trsvcid": "4420", 00:23:47.399 "trtype": "TCP" 00:23:47.399 }, 00:23:47.399 "peer_address": { 00:23:47.399 "adrfam": "IPv4", 00:23:47.399 "traddr": "10.0.0.1", 00:23:47.399 "trsvcid": "39088", 00:23:47.399 "trtype": "TCP" 00:23:47.399 }, 00:23:47.399 "qid": 0, 00:23:47.399 "state": 
"enabled", 00:23:47.399 "thread": "nvmf_tgt_poll_group_000" 00:23:47.399 } 00:23:47.399 ]' 00:23:47.399 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:47.399 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:47.399 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:47.399 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:47.399 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:47.399 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:47.399 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:47.399 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:47.658 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmYyZjQxZTFmY2VjMzVkMWYxOGFmMzgxYjhjN2I4NGE1NjZlNzkwNWE3NWE2OWI5FkTRXA==: --dhchap-ctrl-secret DHHC-1:01:ZmIwMGJiMjI4OTIxZDM3YzQ2NWVjMjY1NTlkNDQ2ZDYpEJql: 00:23:47.658 12:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:02:NmYyZjQxZTFmY2VjMzVkMWYxOGFmMzgxYjhjN2I4NGE1NjZlNzkwNWE3NWE2OWI5FkTRXA==: --dhchap-ctrl-secret DHHC-1:01:ZmIwMGJiMjI4OTIxZDM3YzQ2NWVjMjY1NTlkNDQ2ZDYpEJql: 00:23:48.226 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:48.226 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:48.226 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:23:48.226 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.226 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:48.226 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.227 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:48.227 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:48.227 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:48.485 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:23:48.485 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:48.485 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:23:48.485 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:48.485 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:48.485 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:48.485 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key3 00:23:48.485 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.485 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:48.485 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.485 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:48.485 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:48.485 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:49.052 00:23:49.052 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:49.052 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:49.052 12:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:49.313 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.313 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:49.313 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.313 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:49.313 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.313 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:49.313 { 00:23:49.313 "auth": { 00:23:49.313 "dhgroup": "ffdhe8192", 00:23:49.313 "digest": "sha512", 00:23:49.313 "state": "completed" 00:23:49.313 }, 00:23:49.313 "cntlid": 143, 00:23:49.313 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:23:49.313 "listen_address": { 00:23:49.313 "adrfam": "IPv4", 00:23:49.313 "traddr": "10.0.0.3", 00:23:49.313 "trsvcid": "4420", 00:23:49.313 "trtype": "TCP" 00:23:49.313 }, 00:23:49.313 "peer_address": { 00:23:49.313 "adrfam": "IPv4", 00:23:49.313 "traddr": "10.0.0.1", 00:23:49.313 "trsvcid": "39126", 00:23:49.313 "trtype": "TCP" 00:23:49.313 }, 00:23:49.313 "qid": 0, 00:23:49.313 
"state": "enabled", 00:23:49.313 "thread": "nvmf_tgt_poll_group_000" 00:23:49.313 } 00:23:49.313 ]' 00:23:49.313 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:49.313 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:49.313 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:49.313 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:49.313 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:49.313 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:49.313 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:49.313 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:49.575 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:Y2UyNjUyY2YwZTcxNzU5NDA2MjBjYjM4OGYxNjE0MmMzNjJmN2FkY2I5MGE0Mzc2NThkZTZkZjUxMjA1OTFiZRcJPQM=: 00:23:49.575 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:03:Y2UyNjUyY2YwZTcxNzU5NDA2MjBjYjM4OGYxNjE0MmMzNjJmN2FkY2I5MGE0Mzc2NThkZTZkZjUxMjA1OTFiZRcJPQM=: 00:23:50.142 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:50.142 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:50.142 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:23:50.142 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.142 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:50.142 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.142 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:23:50.142 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:23:50.142 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:23:50.142 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:50.142 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:50.142 12:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:50.400 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:23:50.400 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:50.400 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:50.400 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:50.400 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:50.400 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:50.400 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:50.400 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.400 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:50.400 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.400 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:50.400 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:50.400 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:50.965 00:23:50.965 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:50.965 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:50.965 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:51.222 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.222 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:51.222 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.222 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:51.222 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.222 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:51.222 { 00:23:51.222 "auth": { 00:23:51.222 "dhgroup": "ffdhe8192", 00:23:51.222 "digest": "sha512", 00:23:51.222 "state": "completed" 00:23:51.222 }, 00:23:51.222 
"cntlid": 145, 00:23:51.222 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:23:51.222 "listen_address": { 00:23:51.222 "adrfam": "IPv4", 00:23:51.222 "traddr": "10.0.0.3", 00:23:51.222 "trsvcid": "4420", 00:23:51.222 "trtype": "TCP" 00:23:51.222 }, 00:23:51.222 "peer_address": { 00:23:51.222 "adrfam": "IPv4", 00:23:51.222 "traddr": "10.0.0.1", 00:23:51.222 "trsvcid": "59752", 00:23:51.222 "trtype": "TCP" 00:23:51.222 }, 00:23:51.222 "qid": 0, 00:23:51.222 "state": "enabled", 00:23:51.223 "thread": "nvmf_tgt_poll_group_000" 00:23:51.223 } 00:23:51.223 ]' 00:23:51.223 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:51.223 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:51.223 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:51.223 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:51.223 12:22:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:51.223 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:51.223 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:51.223 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:51.482 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YWI1NWY5YjdhYzZkMzBmZWJlZDRiMjEwNTk0N2MwYmJiYzVkYjIwMzdkNjRjMDgzTJxClg==: --dhchap-ctrl-secret DHHC-1:03:MGE5YzFjYzExNDY2MDk5ODZhMzJkZDBiOWE5ZTU5OWU5NGQwYWU4ZmIwY2I1MGI2ZDEyNzE0ZDBkZDFkZDYwNLfIQ3U=: 00:23:51.482 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:00:YWI1NWY5YjdhYzZkMzBmZWJlZDRiMjEwNTk0N2MwYmJiYzVkYjIwMzdkNjRjMDgzTJxClg==: --dhchap-ctrl-secret DHHC-1:03:MGE5YzFjYzExNDY2MDk5ODZhMzJkZDBiOWE5ZTU5OWU5NGQwYWU4ZmIwY2I1MGI2ZDEyNzE0ZDBkZDFkZDYwNLfIQ3U=: 00:23:52.047 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:52.047 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:52.047 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:23:52.047 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.047 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:52.047 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.047 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key1 00:23:52.047 12:22:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.047 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:52.047 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.047 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:23:52.047 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:52.047 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:23:52.047 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:52.047 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:52.047 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:52.047 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:52.047 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:23:52.047 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:23:52.047 12:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:23:52.614 2024/11/25 12:22:48 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:23:52.614 request: 00:23:52.614 { 00:23:52.614 "method": "bdev_nvme_attach_controller", 00:23:52.614 "params": { 00:23:52.614 "name": "nvme0", 00:23:52.614 "trtype": "tcp", 00:23:52.614 "traddr": "10.0.0.3", 00:23:52.614 "adrfam": "ipv4", 00:23:52.614 "trsvcid": "4420", 00:23:52.614 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:52.614 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:23:52.614 "prchk_reftag": false, 00:23:52.614 "prchk_guard": false, 00:23:52.614 "hdgst": false, 00:23:52.614 "ddgst": false, 00:23:52.614 "dhchap_key": "key2", 00:23:52.614 "allow_unrecognized_csi": false 00:23:52.614 } 00:23:52.614 } 00:23:52.614 Got JSON-RPC error response 00:23:52.614 GoRPCClient: error on JSON-RPC call 00:23:52.614 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:52.614 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 
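The Code=-5 Input/output error above is the expected outcome of this step: the host was re-registered with key1 only, so a DH-HMAC-CHAP attach using key2 must be rejected by the target. The harness wraps the attach in its NOT helper so that the failed call counts as a pass. A minimal illustrative sketch of that inverted assertion, assuming the rpc.py path and host-side RPC socket shown in the trace (the real NOT helper in autotest_common.sh does more elaborate exit-status bookkeeping, as the es=... lines around here show):

  # Simplified stand-in for the harness's NOT helper: succeed only
  # if the wrapped command fails.
  NOT() { if "$@"; then return 1; else return 0; fi; }

  # Attaching with a key the target no longer accepts should fail,
  # and NOT turns that failure into a test pass.
  NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
      bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2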
00:23:52.614 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:52.614 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:52.614 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:23:52.614 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.614 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:52.614 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.614 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:52.614 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.614 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:52.614 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.614 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:52.614 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:52.614 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:52.614 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:52.614 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:52.614 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:52.614 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:52.614 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:52.614 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:52.614 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:53.181 2024/11/25 12:22:48 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) 
hostnqn:nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:23:53.181 request: 00:23:53.181 { 00:23:53.181 "method": "bdev_nvme_attach_controller", 00:23:53.181 "params": { 00:23:53.181 "name": "nvme0", 00:23:53.181 "trtype": "tcp", 00:23:53.181 "traddr": "10.0.0.3", 00:23:53.181 "adrfam": "ipv4", 00:23:53.181 "trsvcid": "4420", 00:23:53.181 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:53.181 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:23:53.181 "prchk_reftag": false, 00:23:53.181 "prchk_guard": false, 00:23:53.181 "hdgst": false, 00:23:53.181 "ddgst": false, 00:23:53.181 "dhchap_key": "key1", 00:23:53.181 "dhchap_ctrlr_key": "ckey2", 00:23:53.181 "allow_unrecognized_csi": false 00:23:53.181 } 00:23:53.181 } 00:23:53.181 Got JSON-RPC error response 00:23:53.181 GoRPCClient: error on JSON-RPC call 00:23:53.181 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:53.181 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:53.181 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:53.181 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:53.181 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:23:53.181 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.181 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:53.181 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.181 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key1 00:23:53.181 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.181 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:53.181 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.181 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:53.181 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:53.181 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:53.181 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:53.181 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:53.181 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # 
type -t bdev_connect 00:23:53.181 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:53.181 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:53.181 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:53.181 12:22:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:53.744 2024/11/25 12:22:49 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey1 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:23:53.744 request: 00:23:53.744 { 00:23:53.744 "method": "bdev_nvme_attach_controller", 00:23:53.744 "params": { 00:23:53.744 "name": "nvme0", 00:23:53.744 "trtype": "tcp", 00:23:53.744 "traddr": "10.0.0.3", 00:23:53.744 "adrfam": "ipv4", 00:23:53.744 "trsvcid": "4420", 00:23:53.744 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:53.744 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:23:53.744 "prchk_reftag": false, 00:23:53.744 "prchk_guard": false, 00:23:53.744 "hdgst": false, 00:23:53.744 "ddgst": false, 00:23:53.744 "dhchap_key": "key1", 00:23:53.744 "dhchap_ctrlr_key": "ckey1", 00:23:53.744 "allow_unrecognized_csi": false 00:23:53.744 } 00:23:53.744 } 00:23:53.744 Got JSON-RPC error response 00:23:53.744 GoRPCClient: error on JSON-RPC call 00:23:53.744 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:53.744 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:53.744 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:53.744 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:53.744 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:23:53.744 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.744 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:53.744 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.744 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 75900 00:23:53.744 12:22:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 75900 ']' 00:23:53.744 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 75900 00:23:53.744 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:23:53.744 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:53.744 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75900 00:23:53.744 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:53.744 killing process with pid 75900 00:23:53.744 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:53.744 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75900' 00:23:53.744 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 75900 00:23:53.745 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 75900 00:23:54.001 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:23:54.001 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:54.001 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:54.001 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:54.001 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=80515 00:23:54.001 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 80515 00:23:54.001 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:23:54.001 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 80515 ']' 00:23:54.001 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:54.001 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:54.001 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
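At this point the test has torn down the previous target (pid 75900) and restarts it with DHCHAP debug logging enabled. A minimal sketch of that restart, assuming the same netns name and binary path shown in the surrounding trace:

    # Stop the old nvmf target and wait for it to exit
    kill 75900 && wait 75900
    # Relaunch inside the test netns; --wait-for-rpc defers initialization
    # until an RPC client connects, and -L nvmf_auth enables auth-path logging
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!   # 80515 in this run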
00:23:54.001 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:54.001 12:22:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:54.933 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:54.933 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:23:54.933 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:54.933 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:54.933 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:54.933 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:54.933 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:23:54.933 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 80515 00:23:54.933 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 80515 ']' 00:23:54.933 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:54.933 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:54.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:54.933 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
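The block of entries that follows loads the generated DHCHAP key files into the restarted target's keyring, registering each keyN and, where one exists, its controller counterpart ckeyN. A hedged reconstruction of that loop, using the temporary key paths visible below (all generated earlier in this run):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # key/ckey pairs as registered by target/auth.sh in this run
    $rpc keyring_file_add_key key0  /tmp/spdk.key-null.cem
    $rpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.lFs
    $rpc keyring_file_add_key key1  /tmp/spdk.key-sha256.RyN
    $rpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.CbZ
    $rpc keyring_file_add_key key2  /tmp/spdk.key-sha384.8mg
    $rpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.kJv
    $rpc keyring_file_add_key key3  /tmp/spdk.key-sha512.no1   # key3 has no ckey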
00:23:54.933 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:54.933 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:54.933 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:54.933 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:23:54.933 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:23:54.933 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.933 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:55.192 null0 00:23:55.192 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.192 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:55.192 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.cem 00:23:55.192 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.192 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:55.192 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.192 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.lFs ]] 00:23:55.192 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.lFs 00:23:55.192 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.192 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:55.192 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.192 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:55.192 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.RyN 00:23:55.192 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.192 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:55.192 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.192 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.CbZ ]] 00:23:55.192 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.CbZ 00:23:55.192 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.192 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:55.192 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.192 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:55.192 12:22:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.8mg 00:23:55.192 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.192 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:55.192 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.192 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.kJv ]] 00:23:55.192 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.kJv 00:23:55.192 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.192 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:55.192 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.192 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:55.192 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.no1 00:23:55.192 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.192 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:55.192 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.192 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:23:55.192 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:23:55.192 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:55.192 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:55.192 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:55.192 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:55.192 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:55.192 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key3 00:23:55.192 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.192 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:55.192 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.192 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:55.192 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
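The connect_authenticate step running here exercises digest sha512 with DH group ffdhe8192 and key3: the target authorizes the host NQN for the key, the host attaches a controller over its private RPC socket (invoked on the next entry), and the resulting qpair is checked for a completed auth state. A condensed sketch of the three calls, with the NQNs and address taken from this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    host_nqn=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d
    # Target side: authorize the host to authenticate with key3
    $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$host_nqn" \
        --dhchap-key key3
    # Host side (separate SPDK app on /var/tmp/host.sock): attach with the same key
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.3 -s 4420 -q "$host_nqn" -n nqn.2024-03.io.spdk:cnode0 \
        -b nvme0 --dhchap-key key3
    # Verify the negotiated digest/dhgroup and that authentication completed
    $rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth'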
00:23:55.192 12:22:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:56.125 nvme0n1 00:23:56.125 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:56.125 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:56.125 12:22:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:56.384 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:56.384 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:56.384 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.384 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:56.384 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.384 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:56.384 { 00:23:56.384 "auth": { 00:23:56.384 "dhgroup": "ffdhe8192", 00:23:56.384 "digest": "sha512", 00:23:56.384 "state": "completed" 00:23:56.384 }, 00:23:56.384 "cntlid": 1, 00:23:56.384 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:23:56.384 "listen_address": { 00:23:56.384 "adrfam": "IPv4", 00:23:56.384 "traddr": "10.0.0.3", 00:23:56.384 "trsvcid": "4420", 00:23:56.384 "trtype": "TCP" 00:23:56.384 }, 00:23:56.384 "peer_address": { 00:23:56.384 "adrfam": "IPv4", 00:23:56.384 "traddr": "10.0.0.1", 00:23:56.384 "trsvcid": "59814", 00:23:56.384 "trtype": "TCP" 00:23:56.384 }, 00:23:56.384 "qid": 0, 00:23:56.384 "state": "enabled", 00:23:56.384 "thread": "nvmf_tgt_poll_group_000" 00:23:56.384 } 00:23:56.384 ]' 00:23:56.384 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:56.384 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:56.384 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:56.384 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:56.384 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:56.384 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:56.384 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:56.384 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:56.642 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:Y2UyNjUyY2YwZTcxNzU5NDA2MjBjYjM4OGYxNjE0MmMzNjJmN2FkY2I5MGE0Mzc2NThkZTZkZjUxMjA1OTFiZRcJPQM=: 00:23:56.642 12:22:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:03:Y2UyNjUyY2YwZTcxNzU5NDA2MjBjYjM4OGYxNjE0MmMzNjJmN2FkY2I5MGE0Mzc2NThkZTZkZjUxMjA1OTFiZRcJPQM=: 00:23:57.207 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:57.464 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:57.464 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:23:57.464 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.464 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:57.464 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.464 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key3 00:23:57.465 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.465 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:57.465 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.465 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:23:57.465 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:23:57.465 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:23:57.465 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:57.465 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:23:57.465 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:57.465 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:57.465 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:57.465 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:57.465 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:57.465 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:57.465 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:57.722 2024/11/25 12:22:53 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:23:57.722 request: 00:23:57.722 { 00:23:57.722 "method": "bdev_nvme_attach_controller", 00:23:57.722 "params": { 00:23:57.722 "name": "nvme0", 00:23:57.722 "trtype": "tcp", 00:23:57.722 "traddr": "10.0.0.3", 00:23:57.722 "adrfam": "ipv4", 00:23:57.722 "trsvcid": "4420", 00:23:57.722 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:57.722 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:23:57.723 "prchk_reftag": false, 00:23:57.723 "prchk_guard": false, 00:23:57.723 "hdgst": false, 00:23:57.723 "ddgst": false, 00:23:57.723 "dhchap_key": "key3", 00:23:57.723 "allow_unrecognized_csi": false 00:23:57.723 } 00:23:57.723 } 00:23:57.723 Got JSON-RPC error response 00:23:57.723 GoRPCClient: error on JSON-RPC call 00:23:57.723 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:57.723 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:57.723 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:57.723 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:57.723 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:23:57.723 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:23:57.723 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:23:57.723 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:23:57.981 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:23:57.981 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:57.981 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:23:57.981 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:57.981 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:57.981 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:57.981 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:57.981 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:57.981 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:57.981 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:58.239 2024/11/25 12:22:53 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key3 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:23:58.239 request: 00:23:58.239 { 00:23:58.239 "method": "bdev_nvme_attach_controller", 00:23:58.239 "params": { 00:23:58.239 "name": "nvme0", 00:23:58.239 "trtype": "tcp", 00:23:58.239 "traddr": "10.0.0.3", 00:23:58.239 "adrfam": "ipv4", 00:23:58.239 "trsvcid": "4420", 00:23:58.239 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:58.239 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:23:58.239 "prchk_reftag": false, 00:23:58.239 "prchk_guard": false, 00:23:58.239 "hdgst": false, 00:23:58.239 "ddgst": false, 00:23:58.239 "dhchap_key": "key3", 00:23:58.239 "allow_unrecognized_csi": false 00:23:58.239 } 00:23:58.239 } 00:23:58.239 Got JSON-RPC error response 00:23:58.239 GoRPCClient: error on JSON-RPC call 00:23:58.239 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:58.239 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:58.239 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:58.239 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:58.239 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:23:58.239 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:23:58.239 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:23:58.239 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:58.239 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:58.239 12:22:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:58.497 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:23:58.497 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.497 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:58.497 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.497 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:23:58.497 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.497 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:58.497 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.497 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:58.497 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:58.497 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:58.497 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:58.497 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:58.497 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:58.497 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:58.498 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:58.498 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:58.498 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:58.756 2024/11/25 12:22:54 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:key1 dhchap_key:key0 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) 
subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:23:58.756 request: 00:23:58.756 { 00:23:58.756 "method": "bdev_nvme_attach_controller", 00:23:58.756 "params": { 00:23:58.756 "name": "nvme0", 00:23:58.756 "trtype": "tcp", 00:23:58.756 "traddr": "10.0.0.3", 00:23:58.756 "adrfam": "ipv4", 00:23:58.756 "trsvcid": "4420", 00:23:58.756 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:58.756 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:23:58.756 "prchk_reftag": false, 00:23:58.756 "prchk_guard": false, 00:23:58.756 "hdgst": false, 00:23:58.756 "ddgst": false, 00:23:58.756 "dhchap_key": "key0", 00:23:58.756 "dhchap_ctrlr_key": "key1", 00:23:58.756 "allow_unrecognized_csi": false 00:23:58.756 } 00:23:58.756 } 00:23:58.756 Got JSON-RPC error response 00:23:58.756 GoRPCClient: error on JSON-RPC call 00:23:58.756 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:58.756 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:58.756 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:58.756 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:58.756 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:23:58.756 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:23:58.756 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:23:59.014 nvme0n1 00:23:59.014 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:23:59.014 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:23:59.014 12:22:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:59.272 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.272 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:59.272 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:59.530 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key1 00:23:59.530 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.530 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:23:59.530 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.530 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:23:59.530 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:59.530 12:22:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:24:00.464 nvme0n1 00:24:00.464 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:24:00.464 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:24:00.464 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:00.464 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.464 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:00.464 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.464 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:00.464 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.464 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:24:00.464 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:00.464 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:24:00.722 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.722 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NmYyZjQxZTFmY2VjMzVkMWYxOGFmMzgxYjhjN2I4NGE1NjZlNzkwNWE3NWE2OWI5FkTRXA==: --dhchap-ctrl-secret DHHC-1:03:Y2UyNjUyY2YwZTcxNzU5NDA2MjBjYjM4OGYxNjE0MmMzNjJmN2FkY2I5MGE0Mzc2NThkZTZkZjUxMjA1OTFiZRcJPQM=: 00:24:00.722 12:22:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 --dhchap-secret DHHC-1:02:NmYyZjQxZTFmY2VjMzVkMWYxOGFmMzgxYjhjN2I4NGE1NjZlNzkwNWE3NWE2OWI5FkTRXA==: --dhchap-ctrl-secret DHHC-1:03:Y2UyNjUyY2YwZTcxNzU5NDA2MjBjYjM4OGYxNjE0MmMzNjJmN2FkY2I5MGE0Mzc2NThkZTZkZjUxMjA1OTFiZRcJPQM=: 00:24:01.287 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 
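Here the test switches to the kernel NVMe initiator: nvme connect is handed the DHCHAP secret (and, for bidirectional auth, the controller secret) directly on the command line, and nvme_get_ctrlr then scans the fabrics sysfs tree for the controller bound to the test subsystem. A sketch with this run's throwaway secrets; the subsysnqn read inside the loop is an assumption about the comparison the trace elides:

    # Connect the kernel host, authenticating with bidirectional DHCHAP
    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d \
        --hostid 5934b511-67a7-4c73-84f0-52fdf36bb79d -l 0 \
        --dhchap-secret 'DHHC-1:02:NmYyZjQxZTFmY2VjMzVkMWYxOGFmMzgxYjhjN2I4NGE1NjZlNzkwNWE3NWE2OWI5FkTRXA==:' \
        --dhchap-ctrl-secret 'DHHC-1:03:Y2UyNjUyY2YwZTcxNzU5NDA2MjBjYjM4OGYxNjE0MmMzNjJmN2FkY2I5MGE0Mzc2NThkZTZkZjUxMjA1OTFiZRcJPQM=:'
    # Locate the controller for the subsystem (likely via its subsysnqn attribute)
    for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme*; do
        if [[ $(cat "$dev/subsysnqn") == nqn.2024-03.io.spdk:cnode0 ]]; then
            nctrlr=${dev##*/}; break
        fi
    done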
00:24:01.287 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:24:01.287 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:24:01.287 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:24:01.287 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:24:01.287 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:24:01.287 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:24:01.287 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:01.287 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:01.545 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:24:01.545 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:24:01.545 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:24:01.545 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:24:01.545 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:01.545 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:24:01.545 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:01.545 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:24:01.545 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:24:01.545 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:24:02.111 2024/11/25 12:22:57 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-03.io.spdk:cnode0 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:24:02.111 request: 00:24:02.111 { 00:24:02.111 "method": "bdev_nvme_attach_controller", 00:24:02.111 "params": { 00:24:02.111 "name": "nvme0", 00:24:02.111 "trtype": "tcp", 00:24:02.111 "traddr": "10.0.0.3", 00:24:02.111 "adrfam": "ipv4", 
00:24:02.111 "trsvcid": "4420", 00:24:02.111 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:24:02.111 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d", 00:24:02.111 "prchk_reftag": false, 00:24:02.111 "prchk_guard": false, 00:24:02.111 "hdgst": false, 00:24:02.111 "ddgst": false, 00:24:02.111 "dhchap_key": "key1", 00:24:02.111 "allow_unrecognized_csi": false 00:24:02.111 } 00:24:02.111 } 00:24:02.111 Got JSON-RPC error response 00:24:02.111 GoRPCClient: error on JSON-RPC call 00:24:02.111 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:24:02.111 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:02.111 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:02.111 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:02.111 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:02.111 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:02.111 12:22:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:03.044 nvme0n1 00:24:03.044 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:24:03.044 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:03.044 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:24:03.302 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.302 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:03.302 12:22:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:03.302 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:24:03.302 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.302 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:03.302 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.302 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:24:03.302 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:24:03.302 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:24:03.559 nvme0n1 00:24:03.834 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:24:03.834 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:24:03.834 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:03.835 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.835 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:24:03.835 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:24:04.092 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key1 --dhchap-ctrlr-key key3 00:24:04.092 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.092 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:04.092 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.092 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YmE3MzNkYWY0NmMyMWY0OTAzMGNhMzIzMWQxMGJmNzm5Zgcu: '' 2s 00:24:04.092 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:24:04.092 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:24:04.092 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YmE3MzNkYWY0NmMyMWY0OTAzMGNhMzIzMWQxMGJmNzm5Zgcu: 00:24:04.092 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:24:04.092 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:24:04.092 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:24:04.093 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YmE3MzNkYWY0NmMyMWY0OTAzMGNhMzIzMWQxMGJmNzm5Zgcu: ]] 00:24:04.093 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YmE3MzNkYWY0NmMyMWY0OTAzMGNhMzIzMWQxMGJmNzm5Zgcu: 00:24:04.093 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:24:04.093 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:24:04.093 12:22:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:24:06.621 12:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@241 -- # waitforblk nvme0n1 00:24:06.621 12:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:24:06.621 12:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:24:06.621 12:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:06.621 12:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:06.621 12:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:24:06.621 12:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:24:06.621 12:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key1 --dhchap-ctrlr-key key2 00:24:06.621 12:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.621 12:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:06.621 12:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.621 12:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NmYyZjQxZTFmY2VjMzVkMWYxOGFmMzgxYjhjN2I4NGE1NjZlNzkwNWE3NWE2OWI5FkTRXA==: 2s 00:24:06.621 12:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:24:06.621 12:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:24:06.621 12:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:24:06.621 12:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NmYyZjQxZTFmY2VjMzVkMWYxOGFmMzgxYjhjN2I4NGE1NjZlNzkwNWE3NWE2OWI5FkTRXA==: 00:24:06.621 12:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:24:06.621 12:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:24:06.621 12:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:24:06.621 12:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NmYyZjQxZTFmY2VjMzVkMWYxOGFmMzgxYjhjN2I4NGE1NjZlNzkwNWE3NWE2OWI5FkTRXA==: ]] 00:24:06.621 12:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NmYyZjQxZTFmY2VjMzVkMWYxOGFmMzgxYjhjN2I4NGE1NjZlNzkwNWE3NWE2OWI5FkTRXA==: 00:24:06.622 12:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:24:06.622 12:23:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:24:08.535 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:24:08.535 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:24:08.535 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:08.535 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:24:08.535 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:24:08.535 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:08.535 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:24:08.535 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:24:08.535 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:24:08.535 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key0 --dhchap-ctrlr-key key1 00:24:08.535 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:08.535 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:08.535 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:08.535 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:08.535 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:08.535 12:23:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:09.101 nvme0n1 00:24:09.101 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:09.101 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.101 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:09.101 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.101 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:09.101 12:23:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:09.668 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:24:09.668 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:09.668 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r 
'.[].name' 00:24:09.927 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:09.927 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:24:09.927 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.927 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:09.927 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.927 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:24:09.927 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:24:09.927 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:24:09.927 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:09.927 12:23:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:24:10.493 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.493 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key2 --dhchap-ctrlr-key key3 00:24:10.493 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.493 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:10.493 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.493 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:24:10.493 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:24:10.493 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:24:10.493 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:24:10.493 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:10.493 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:24:10.493 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:10.493 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:24:10.493 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 
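This is the final negative check of the rekeying flow: the target's subsystem keys were just rotated to key2/key3, so asking the host controller to re-key with key1 must be refused, and the Permission denied (Code=-13) response recorded next is the outcome the test asserts via NOT. In outline, using the two calls verbatim from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Target now only accepts key2 (controller key key3) for this host
    $rpc nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d \
        --dhchap-key key2 --dhchap-ctrlr-key key3
    # Host attempts to rotate to a mismatched key pair: expect Code=-13
    $rpc -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
        --dhchap-key key1 --dhchap-ctrlr-key key3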
00:24:11.059 2024/11/25 12:23:06 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:key3 dhchap_key:key1 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied
00:24:11.059 request:
00:24:11.059 {
00:24:11.059 "method": "bdev_nvme_set_keys",
00:24:11.059 "params": {
00:24:11.059 "name": "nvme0",
00:24:11.059 "dhchap_key": "key1",
00:24:11.059 "dhchap_ctrlr_key": "key3"
00:24:11.059 }
00:24:11.059 }
00:24:11.059 Got JSON-RPC error response
00:24:11.059 GoRPCClient: error on JSON-RPC call
00:24:11.059 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:24:11.059 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:24:11.059 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:24:11.059 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:24:11.059 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers
00:24:11.059 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:24:11.059 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length
00:24:11.059 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 ))
00:24:11.059 12:23:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s
00:24:12.430 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers
00:24:12.430 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length
00:24:12.430 12:23:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:24:12.430 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 ))
00:24:12.430 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key0 --dhchap-ctrlr-key key1
00:24:12.430 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:12.430 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:12.430 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:12.430 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:24:12.430 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:24:12.430 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:24:13.364 nvme0n1
00:24:13.364 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --dhchap-key key2 --dhchap-ctrlr-key key3
00:24:13.364 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:13.364 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:24:13.364 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:13.364 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:24:13.364 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:24:13.364 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:24:13.364 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc
00:24:13.364 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:13.364 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc
00:24:13.364 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:24:13.364 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:24:13.364 12:23:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:24:13.621 2024/11/25 12:23:09 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:key0 dhchap_key:key2 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied
00:24:13.621 request:
00:24:13.621 {
00:24:13.621 "method": "bdev_nvme_set_keys",
00:24:13.621 "params": {
00:24:13.621 "name": "nvme0",
00:24:13.621 "dhchap_key": "key2",
00:24:13.621 "dhchap_ctrlr_key": "key0"
00:24:13.621 }
00:24:13.621 }
00:24:13.621 Got JSON-RPC error response
00:24:13.621 GoRPCClient: error on JSON-RPC call
00:24:13.621 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:24:13.621 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:24:13.621 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:24:13.621 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:24:13.879 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers
00:24:13.879 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s
/var/tmp/host.sock bdev_nvme_get_controllers 00:24:13.879 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:24:13.879 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:24:13.880 12:23:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:24:15.253 12:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:24:15.253 12:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:24:15.253 12:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:24:15.253 12:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:24:15.253 12:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:24:15.253 12:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:24:15.253 12:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 75940 00:24:15.253 12:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 75940 ']' 00:24:15.253 12:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 75940 00:24:15.253 12:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:24:15.253 12:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:15.253 12:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75940 00:24:15.253 12:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:15.253 killing process with pid 75940 00:24:15.253 12:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:15.253 12:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75940' 00:24:15.253 12:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 75940 00:24:15.253 12:23:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 75940 00:24:15.511 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:24:15.511 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:15.511 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:24:15.769 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:15.769 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:24:15.769 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:15.769 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:15.769 rmmod nvme_tcp 00:24:15.769 rmmod nvme_fabrics 00:24:15.769 rmmod nvme_keyring 00:24:16.039 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:16.039 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 
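The exchange just completed is the heart of the DH-HMAC-CHAP re-keying test in target/auth.sh: the target side is re-keyed first with nvmf_subsystem_set_keys, the live controller is re-authenticated with bdev_nvme_set_keys, and the NOT helper then requires a deliberately mismatched key pair to fail, which is the Code=-13 (Permission denied) JSON-RPC error logged above; bdev_nvme_get_controllers piped to jq length is polled until the controller, attached with --ctrlr-loss-timeout-sec 1, drops away. A condensed sketch of that flow using the same RPCs, socket path, and key slots as the run above (a hedged illustration, not the harness itself):

# target first, then host: both ends must agree before the next reconnect
rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d \
    --dhchap-key key2 --dhchap-ctrlr-key key3
rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
    --dhchap-key key2 --dhchap-ctrlr-key key3
# a mismatched pair must be rejected with Code=-13 Permission denied
if rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
    --dhchap-key key1 --dhchap-ctrlr-key key3; then
    echo "expected set_keys to fail" >&2; exit 1
fi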
00:24:16.039 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:24:16.039 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 80515 ']' 00:24:16.039 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 80515 00:24:16.039 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 80515 ']' 00:24:16.039 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 80515 00:24:16.039 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:24:16.039 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:16.039 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80515 00:24:16.039 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:16.039 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:16.039 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80515' 00:24:16.039 killing process with pid 80515 00:24:16.039 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 80515 00:24:16.039 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 80515 00:24:16.039 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:16.039 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:16.039 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:16.039 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:24:16.039 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:24:16.039 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:16.039 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:24:16.039 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:16.039 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:16.039 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:16.039 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:16.039 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:16.039 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:16.039 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:16.039 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:16.039 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:16.039 12:23:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:16.039 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:16.296 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:16.296 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:16.296 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:16.296 12:23:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:16.296 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:16.296 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:16.296 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:16.296 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:16.296 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:24:16.296 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.cem /tmp/spdk.key-sha256.RyN /tmp/spdk.key-sha384.8mg /tmp/spdk.key-sha512.no1 /tmp/spdk.key-sha512.lFs /tmp/spdk.key-sha384.CbZ /tmp/spdk.key-sha256.kJv '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:24:16.296 ************************************ 00:24:16.296 END TEST nvmf_auth_target 00:24:16.296 ************************************ 00:24:16.296 00:24:16.296 real 2m36.299s 00:24:16.296 user 6m12.237s 00:24:16.296 sys 0m16.065s 00:24:16.296 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:16.296 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:24:16.296 12:23:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:24:16.296 12:23:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:24:16.296 12:23:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:16.296 12:23:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:16.296 12:23:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:16.296 ************************************ 00:24:16.296 START TEST nvmf_bdevio_no_huge 00:24:16.296 ************************************ 00:24:16.296 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:24:16.296 * Looking for test storage... 
00:24:16.296 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:16.296 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:16.296 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:16.296 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:24:16.555 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:16.555 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:16.555 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:16.555 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:16.555 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:24:16.555 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:24:16.555 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:24:16.555 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:24:16.555 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:24:16.555 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:24:16.555 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:24:16.555 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:16.555 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:24:16.555 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:24:16.555 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:16.555 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:16.555 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:24:16.555 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:24:16.555 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:16.555 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:24:16.555 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:24:16.555 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:24:16.555 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:24:16.555 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:16.555 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:24:16.555 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:24:16.555 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:16.555 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:16.555 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:24:16.555 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:16.555 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:16.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.555 --rc genhtml_branch_coverage=1 00:24:16.555 --rc genhtml_function_coverage=1 00:24:16.555 --rc genhtml_legend=1 00:24:16.555 --rc geninfo_all_blocks=1 00:24:16.555 --rc geninfo_unexecuted_blocks=1 00:24:16.555 00:24:16.555 ' 00:24:16.555 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:16.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.555 --rc genhtml_branch_coverage=1 00:24:16.555 --rc genhtml_function_coverage=1 00:24:16.555 --rc genhtml_legend=1 00:24:16.555 --rc geninfo_all_blocks=1 00:24:16.555 --rc geninfo_unexecuted_blocks=1 00:24:16.555 00:24:16.555 ' 00:24:16.555 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:16.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.555 --rc genhtml_branch_coverage=1 00:24:16.555 --rc genhtml_function_coverage=1 00:24:16.555 --rc genhtml_legend=1 00:24:16.555 --rc geninfo_all_blocks=1 00:24:16.555 --rc geninfo_unexecuted_blocks=1 00:24:16.555 00:24:16.555 ' 00:24:16.555 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:16.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.555 --rc genhtml_branch_coverage=1 00:24:16.555 --rc genhtml_function_coverage=1 00:24:16.555 --rc genhtml_legend=1 00:24:16.555 --rc geninfo_all_blocks=1 00:24:16.555 --rc geninfo_unexecuted_blocks=1 00:24:16.555 00:24:16.555 ' 00:24:16.555 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:16.555 
12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:24:16.555 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:16.555 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:16.555 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:16.555 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:16.555 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:16.555 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:16.555 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:16.555 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:16.555 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:16.555 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:16.555 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:24:16.555 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:24:16.555 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:16.555 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:16.555 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:16.555 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:16.555 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:16.555 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:24:16.555 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:16.555 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:16.555 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:16.555 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.555 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:16.556 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:16.556 
12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:16.556 Cannot find device "nvmf_init_br" 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:16.556 Cannot find device "nvmf_init_br2" 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:16.556 Cannot find device "nvmf_tgt_br" 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:16.556 Cannot find device "nvmf_tgt_br2" 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:16.556 Cannot find device "nvmf_init_br" 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:16.556 Cannot find device "nvmf_init_br2" 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:16.556 Cannot find device "nvmf_tgt_br" 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:16.556 Cannot find device "nvmf_tgt_br2" 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:16.556 Cannot find device "nvmf_br" 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:16.556 Cannot find device "nvmf_init_if" 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:16.556 Cannot find device "nvmf_init_if2" 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:24:16.556 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:16.556 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:16.556 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:16.557 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:16.557 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:16.557 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:16.557 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:16.814 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:16.814 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:16.814 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:16.814 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:16.814 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:16.814 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:16.814 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:16.815 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:16.815 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:16.815 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:16.815 12:23:12 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:24:16.815 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:24:16.815 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:24:16.815 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:24:16.815 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:24:16.815 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:24:16.815 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:24:16.815 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:24:16.815 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:24:16.815 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:24:16.815 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:24:16.815 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:24:16.815 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:24:16.815 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms
00:24:16.815
00:24:16.815 --- 10.0.0.3 ping statistics ---
00:24:16.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:16.815 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms
00:24:16.815 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:24:16.815 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:24:16.815 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms
00:24:16.815
00:24:16.815 --- 10.0.0.4 ping statistics ---
00:24:16.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:16.815 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms
00:24:16.815 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:24:16.815 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:16.815 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.018 ms
00:24:16.815
00:24:16.815 --- 10.0.0.1 ping statistics ---
00:24:16.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:16.815 rtt min/avg/max/mdev = 0.018/0.018/0.018/0.000 ms
00:24:16.815 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:24:16.815 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:16.815 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms
00:24:16.815
00:24:16.815 --- 10.0.0.2 ping statistics ---
00:24:16.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:16.815 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms
00:24:16.815 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:16.815 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0
00:24:16.815 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:24:16.815 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:16.815 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:24:16.815 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:24:16.815 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:16.815 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:24:16.815 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:24:16.815 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78
00:24:16.815 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:24:16.815 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable
00:24:16.815 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:24:16.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:16.815 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=81371
00:24:16.815 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 81371
00:24:16.815 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
00:24:16.815 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 81371 ']'
00:24:16.815 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:16.815 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:16.815 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:16.815 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:16.815 12:23:12 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:24:16.815 [2024-11-25 12:23:12.567139] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization...
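nvmfappstart here is what makes this the no-huge variant of the suite: the target is launched inside the test namespace with --no-huge -s 1024, so DPDK (see the EAL parameter banner that follows) backs SPDK memory with plain anonymous pages in va IOVA mode instead of hugepages. A sketch of the equivalent manual launch, with the binary path, core mask, and limits taken from the command line above (the socket poll is a simplified stand-in for waitforlisten):

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78   # 1024 MB anonymous memory, cores 3-6
# wait for the RPC socket before issuing any rpc.py calls:
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done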
00:24:16.815 [2024-11-25 12:23:12.567197] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:24:17.072 [2024-11-25 12:23:12.722735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:17.072 [2024-11-25 12:23:12.771810] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:17.072 [2024-11-25 12:23:12.771852] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:17.072 [2024-11-25 12:23:12.771858] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:17.072 [2024-11-25 12:23:12.771863] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:17.072 [2024-11-25 12:23:12.771868] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:17.072 [2024-11-25 12:23:12.772375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:17.072 [2024-11-25 12:23:12.772663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:24:17.072 [2024-11-25 12:23:12.772449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:24:17.072 [2024-11-25 12:23:12.772666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:17.638 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:17.638 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:24:17.638 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:17.638 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:17.638 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:17.638 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:17.638 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:17.638 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.638 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:17.638 [2024-11-25 12:23:13.426086] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:17.638 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.638 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:17.638 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.638 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:17.638 Malloc0 00:24:17.638 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.638 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:24:17.638 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.638 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:17.638 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.638 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:17.638 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.638 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:17.638 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.638 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:17.638 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.638 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:17.638 [2024-11-25 12:23:13.462568] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:17.638 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.638 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:24:17.638 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:24:17.638 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:24:17.638 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:24:17.638 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:17.638 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:17.638 { 00:24:17.638 "params": { 00:24:17.638 "name": "Nvme$subsystem", 00:24:17.638 "trtype": "$TEST_TRANSPORT", 00:24:17.638 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:17.638 "adrfam": "ipv4", 00:24:17.638 "trsvcid": "$NVMF_PORT", 00:24:17.638 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:17.638 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:17.638 "hdgst": ${hdgst:-false}, 00:24:17.638 "ddgst": ${ddgst:-false} 00:24:17.638 }, 00:24:17.638 "method": "bdev_nvme_attach_controller" 00:24:17.638 } 00:24:17.638 EOF 00:24:17.638 )") 00:24:17.638 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:24:17.638 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
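The config=() and heredoc machinery above is gen_nvmf_target_json assembling one bdev_nvme_attach_controller entry per subsystem; bdevio receives the result through process substitution, which is why its command line shows --json /dev/fd/62. A hedged equivalent of that invocation (the JSON it produces is printed next in the log):

# bdevio replays each "method" entry from the JSON at startup, so the
# attach below is what surfaces the remote namespace as Nvme1n1
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio \
    --json <(gen_nvmf_target_json) --no-huge -s 1024   # <(...) appears as a /dev/fd path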
00:24:17.638 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=,
00:24:17.638 12:23:13 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:24:17.638 "params": {
00:24:17.638 "name": "Nvme1",
00:24:17.638 "trtype": "tcp",
00:24:17.638 "traddr": "10.0.0.3",
00:24:17.638 "adrfam": "ipv4",
00:24:17.638 "trsvcid": "4420",
00:24:17.638 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:24:17.638 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:24:17.638 "hdgst": false,
00:24:17.638 "ddgst": false
00:24:17.638 },
00:24:17.638 "method": "bdev_nvme_attach_controller"
00:24:17.638 }'
00:24:17.638 [2024-11-25 12:23:13.503363] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization...
00:24:17.638 [2024-11-25 12:23:13.503520] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid81420 ]
00:24:17.897 [2024-11-25 12:23:13.646710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:24:17.897 [2024-11-25 12:23:13.695925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:24:17.897 [2024-11-25 12:23:13.695982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:24:17.897 [2024-11-25 12:23:13.695984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:18.155 I/O targets:
00:24:18.155 Nvme1n1: 131072 blocks of 512 bytes (64 MiB)
00:24:18.155
00:24:18.155
00:24:18.155 CUnit - A unit testing framework for C - Version 2.1-3
00:24:18.155 http://cunit.sourceforge.net/
00:24:18.155
00:24:18.155
00:24:18.155 Suite: bdevio tests on: Nvme1n1
00:24:18.155 Test: blockdev write read block ...passed
00:24:18.155 Test: blockdev write zeroes read block ...passed
00:24:18.155 Test: blockdev write zeroes read no split ...passed
00:24:18.155 Test: blockdev write zeroes read split ...passed
00:24:18.155 Test: blockdev write zeroes read split partial ...passed
00:24:18.155 Test: blockdev reset ...[2024-11-25 12:23:13.975792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:24:18.155 [2024-11-25 12:23:13.976020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20036e0 (9): Bad file descriptor
00:24:18.155 [2024-11-25 12:23:13.986721] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful.
00:24:18.155 passed
00:24:18.155 Test: blockdev write read 8 blocks ...passed
00:24:18.155 Test: blockdev write read size > 128k ...passed
00:24:18.155 Test: blockdev write read invalid size ...passed
00:24:18.413 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:24:18.413 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:24:18.413 Test: blockdev write read max offset ...passed
00:24:18.413 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:24:18.413 Test: blockdev writev readv 8 blocks ...passed
00:24:18.413 Test: blockdev writev readv 30 x 1block ...passed
00:24:18.413 Test: blockdev writev readv block ...passed
00:24:18.413 Test: blockdev writev readv size > 128k ...passed
00:24:18.413 Test: blockdev writev readv size > 128k in two iovs ...passed
00:24:18.413 Test: blockdev comparev and writev ...[2024-11-25 12:23:14.161117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:24:18.413 [2024-11-25 12:23:14.161154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:18.413 [2024-11-25 12:23:14.161164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:24:18.413 [2024-11-25 12:23:14.161169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:24:18.413 [2024-11-25 12:23:14.161357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:24:18.413 [2024-11-25 12:23:14.161363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:24:18.413 [2024-11-25 12:23:14.161372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:24:18.413 [2024-11-25 12:23:14.161377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:24:18.413 [2024-11-25 12:23:14.161569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:24:18.413 [2024-11-25 12:23:14.161576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:24:18.413 [2024-11-25 12:23:14.161585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:24:18.413 [2024-11-25 12:23:14.161589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:24:18.413 [2024-11-25 12:23:14.161791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:24:18.413 [2024-11-25 12:23:14.161797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:24:18.414 [2024-11-25 12:23:14.161806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:24:18.414 [2024-11-25 12:23:14.161811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:24:18.414 passed
00:24:18.414 Test: blockdev nvme passthru rw ...passed
00:24:18.414 Test: blockdev nvme passthru vendor specific ...[2024-11-25 12:23:14.246287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:24:18.414 [2024-11-25 12:23:14.246305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:24:18.414 [2024-11-25 12:23:14.246375] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:24:18.414 [2024-11-25 12:23:14.246381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:24:18.414 [2024-11-25 12:23:14.246455] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:24:18.414 [2024-11-25 12:23:14.246461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:24:18.414 passed
00:24:18.414 Test: blockdev nvme admin passthru ...[2024-11-25 12:23:14.246534] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:24:18.414 [2024-11-25 12:23:14.246540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:24:18.414 passed
00:24:18.672 Test: blockdev copy ...passed
00:24:18.672
00:24:18.672 Run Summary: Type Total Ran Passed Failed Inactive
00:24:18.672 suites 1 1 n/a 0 0
00:24:18.672 tests 23 23 23 0 0
00:24:18.672 asserts 152 152 152 0 n/a
00:24:18.672
00:24:18.672 Elapsed time = 0.908 seconds
00:24:18.672 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:18.672 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:18.672 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:24:18.930 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:18.930 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT
00:24:18.930 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini
00:24:18.930 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup
00:24:18.930 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync
00:24:18.930 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:18.930 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e
00:24:18.930 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:18.930 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:24:18.930 rmmod nvme_tcp
00:24:18.930 rmmod nvme_fabrics
00:24:18.930 rmmod nvme_keyring
00:24:18.930 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:18.930 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge --
nvmf/common.sh@128 -- # set -e 00:24:18.930 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:24:18.930 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 81371 ']' 00:24:18.930 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 81371 00:24:18.930 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 81371 ']' 00:24:18.930 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 81371 00:24:18.930 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:24:18.930 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:18.930 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81371 00:24:18.930 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:24:18.930 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:24:18.930 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81371' 00:24:18.930 killing process with pid 81371 00:24:18.930 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 81371 00:24:18.930 12:23:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 81371 00:24:19.188 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:19.188 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:19.188 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:19.188 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:24:19.188 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:24:19.188 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:19.188 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:24:19.188 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:19.188 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:19.188 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:19.188 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:19.188 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:19.445 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:19.445 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:19.445 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:19.445 12:23:15 
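The iptr step traced above (nvmf/common.sh:791) undoes the test's firewall changes in a single pipeline: every rule the harness installs is tagged with an 'SPDK_NVMF:' comment, so cleanup dumps the full ruleset, filters the tagged rules out, and restores the remainder. Condensed from the trace:

# Drop every iptables rule carrying the SPDK_NVMF comment tag; keep the rest.
iptables-save | grep -v SPDK_NVMF | iptables-restore
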
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:19.445 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:19.445 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:19.445 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:19.445 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:19.445 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:19.445 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:19.445 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:19.445 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:19.445 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:19.445 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:19.445 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:24:19.445 00:24:19.445 real 0m3.184s 00:24:19.445 user 0m10.456s 00:24:19.445 sys 0m1.082s 00:24:19.445 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:19.445 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:19.445 ************************************ 00:24:19.445 END TEST nvmf_bdevio_no_huge 00:24:19.445 ************************************ 00:24:19.445 12:23:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:24:19.445 12:23:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:19.445 12:23:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:19.445 12:23:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:19.445 ************************************ 00:24:19.445 START TEST nvmf_tls 00:24:19.445 ************************************ 00:24:19.445 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:24:19.704 * Looking for test storage... 
00:24:19.704 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:19.704 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:19.704 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:24:19.704 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:19.704 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:19.704 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:19.704 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:19.704 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:19.704 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:24:19.704 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:24:19.704 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:24:19.704 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:24:19.704 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:24:19.704 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:24:19.704 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:24:19.704 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:19.704 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:24:19.704 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:24:19.704 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:19.704 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:19.704 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:24:19.704 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:24:19.704 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:19.704 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:24:19.704 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:24:19.704 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:24:19.704 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:24:19.704 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:19.704 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:24:19.704 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:24:19.704 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:19.704 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:19.704 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:24:19.704 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:19.704 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:19.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:19.704 --rc genhtml_branch_coverage=1 00:24:19.704 --rc genhtml_function_coverage=1 00:24:19.704 --rc genhtml_legend=1 00:24:19.704 --rc geninfo_all_blocks=1 00:24:19.704 --rc geninfo_unexecuted_blocks=1 00:24:19.704 00:24:19.704 ' 00:24:19.704 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:19.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:19.704 --rc genhtml_branch_coverage=1 00:24:19.704 --rc genhtml_function_coverage=1 00:24:19.704 --rc genhtml_legend=1 00:24:19.704 --rc geninfo_all_blocks=1 00:24:19.704 --rc geninfo_unexecuted_blocks=1 00:24:19.704 00:24:19.704 ' 00:24:19.704 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:19.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:19.704 --rc genhtml_branch_coverage=1 00:24:19.704 --rc genhtml_function_coverage=1 00:24:19.704 --rc genhtml_legend=1 00:24:19.704 --rc geninfo_all_blocks=1 00:24:19.704 --rc geninfo_unexecuted_blocks=1 00:24:19.704 00:24:19.704 ' 00:24:19.704 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:19.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:19.704 --rc genhtml_branch_coverage=1 00:24:19.704 --rc genhtml_function_coverage=1 00:24:19.704 --rc genhtml_legend=1 00:24:19.704 --rc geninfo_all_blocks=1 00:24:19.704 --rc geninfo_unexecuted_blocks=1 00:24:19.704 00:24:19.704 ' 00:24:19.704 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:19.704 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:24:19.704 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:19.704 12:23:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:19.704 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:19.704 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:19.705 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:19.705 
12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:19.705 Cannot find device "nvmf_init_br" 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:24:19.705 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:19.706 Cannot find device "nvmf_init_br2" 00:24:19.706 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:24:19.706 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:19.706 Cannot find device "nvmf_tgt_br" 00:24:19.706 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:24:19.706 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:19.706 Cannot find device "nvmf_tgt_br2" 00:24:19.706 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:24:19.706 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:19.706 Cannot find device "nvmf_init_br" 00:24:19.706 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:24:19.706 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:19.706 Cannot find device "nvmf_init_br2" 00:24:19.706 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:24:19.706 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:19.706 Cannot find device "nvmf_tgt_br" 00:24:19.706 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:24:19.706 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:19.706 Cannot find device "nvmf_tgt_br2" 00:24:19.706 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:24:19.706 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:19.706 Cannot find device "nvmf_br" 00:24:19.706 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:24:19.706 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:19.706 Cannot find device "nvmf_init_if" 00:24:19.706 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:24:19.706 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:19.706 Cannot find device "nvmf_init_if2" 00:24:19.706 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:24:19.706 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:19.706 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:19.706 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:24:19.706 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:19.706 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:19.963 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:24:19.964 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:19.964 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:19.964 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:19.964 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:19.964 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:19.964 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:19.964 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:19.964 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:19.964 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:19.964 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:19.964 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:19.964 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:19.964 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:19.964 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:19.964 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:19.964 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:19.964 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:19.964 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:19.964 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:19.964 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:19.964 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:19.964 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:19.964 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:19.964 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:19.964 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:19.964 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:19.964 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:19.964 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:19.964 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:19.964 12:23:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:19.964 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:19.964 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:19.964 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:19.964 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:19.964 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:24:19.964 00:24:19.964 --- 10.0.0.3 ping statistics --- 00:24:19.964 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:19.964 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:24:19.964 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:19.964 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:19.964 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.073 ms 00:24:19.964 00:24:19.964 --- 10.0.0.4 ping statistics --- 00:24:19.964 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:19.964 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:24:19.964 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:19.964 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:19.964 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:24:19.964 00:24:19.964 --- 10.0.0.1 ping statistics --- 00:24:19.964 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:19.964 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:24:19.964 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:19.964 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:19.964 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.036 ms 00:24:19.964 00:24:19.964 --- 10.0.0.2 ping statistics --- 00:24:19.964 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:19.964 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:24:19.964 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:19.964 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:24:19.964 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:19.964 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:19.964 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:19.964 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:19.964 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:19.964 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:19.964 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:19.964 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:24:19.964 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:19.964 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:19.964 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:19.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:19.964 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=81655 00:24:19.964 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 81655 00:24:19.964 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 81655 ']' 00:24:19.964 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:19.964 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:19.964 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:24:19.964 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:19.964 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:19.964 12:23:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:19.964 [2024-11-25 12:23:15.817530] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
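The "Cannot find device" and "Cannot open network namespace" messages above are expected: each teardown command is allowed to fail (note the "-- # true" records) before nvmf_veth_init rebuilds the topology that the pings then verify. Reduced to a single initiator/target pair (the actual run wires up a second pair the same way for 10.0.0.2/10.0.0.4), the traced layout looks like this sketch, assuming root:

# One veth pair per endpoint; the target end lives in its own netns and the
# host-side peers are bridged together so 10.0.0.1 can reach 10.0.0.3.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# Open the NVMe/TCP port, tagged so iptr can strip it again at teardown.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.3    # host -> target namespace, as verified above
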
00:24:19.964 [2024-11-25 12:23:15.817572] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:20.222 [2024-11-25 12:23:15.954250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.222 [2024-11-25 12:23:15.989425] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:20.222 [2024-11-25 12:23:15.989586] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:20.222 [2024-11-25 12:23:15.989597] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:20.222 [2024-11-25 12:23:15.989601] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:20.222 [2024-11-25 12:23:15.989607] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:20.222 [2024-11-25 12:23:15.989892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:21.155 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:21.155 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:21.155 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:21.155 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:21.155 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:21.155 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:21.155 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:24:21.155 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:24:21.155 true 00:24:21.155 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:21.155 12:23:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:24:21.413 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:24:21.413 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:24:21.413 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:24:21.691 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:24:21.691 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:21.691 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:24:21.691 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:24:21.691 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:24:21.949 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:24:21.949 12:23:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:22.206 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:24:22.206 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:24:22.206 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:22.206 12:23:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:24:22.464 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:24:22.464 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:24:22.464 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:24:22.721 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:22.721 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:24:22.979 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:24:22.979 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:24:22.979 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:24:22.979 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:22.979 12:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:24:23.239 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:24:23.239 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:24:23.239 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:24:23.239 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:24:23.239 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:24:23.239 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:24:23.239 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:24:23.239 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:24:23.239 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:24:23.239 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:23.239 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:24:23.239 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:24:23.239 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:24:23.239 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:24:23.239 12:23:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:24:23.239 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:24:23.239 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:24:23.239 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:24:23.239 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:24:23.239 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.5iuyUsj1ea 00:24:23.239 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:24:23.239 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.klscqPGRo8 00:24:23.239 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:23.239 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:24:23.239 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.5iuyUsj1ea 00:24:23.496 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.klscqPGRo8 00:24:23.496 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:24:23.496 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:24:23.754 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.5iuyUsj1ea 00:24:23.754 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.5iuyUsj1ea 00:24:23.754 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:24.013 [2024-11-25 12:23:19.809627] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:24.013 12:23:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:24.271 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:24:24.529 [2024-11-25 12:23:20.225672] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:24.530 [2024-11-25 12:23:20.225873] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:24.530 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:24.788 malloc0 00:24:24.788 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:24.788 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.5iuyUsj1ea 00:24:25.046 12:23:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:25.303 12:23:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.5iuyUsj1ea 00:24:37.495 Initializing NVMe Controllers 00:24:37.495 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:24:37.495 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:37.495 Initialization complete. Launching workers. 00:24:37.496 ======================================================== 00:24:37.496 Latency(us) 00:24:37.496 Device Information : IOPS MiB/s Average min max 00:24:37.496 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15856.59 61.94 4036.71 1068.99 8311.95 00:24:37.496 ======================================================== 00:24:37.496 Total : 15856.59 61.94 4036.71 1068.99 8311.95 00:24:37.496 00:24:37.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:37.496 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5iuyUsj1ea 00:24:37.496 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:37.496 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:37.496 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:37.496 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.5iuyUsj1ea 00:24:37.496 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:37.496 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=82014 00:24:37.496 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:37.496 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 82014 /var/tmp/bdevperf.sock 00:24:37.496 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 82014 ']' 00:24:37.496 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:37.496 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:37.496 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:37.496 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:37.496 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:37.496 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:37.496 [2024-11-25 12:23:31.287053] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
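The target behind the perf run above was provisioned by the preceding records: two PSKs in NVMe TLS interchange format written to 0600 mktemp files, then a TLS-enabled listener wired up over RPC against the nvmf_tgt started with --wait-for-rpc. A condensed sketch follows; the Python payload is an assumption inferred from the printed NVMeTLSkey-1:01:... strings (base64 of the raw key bytes plus what appears to be a little-endian CRC32, with 01 as the SHA-256 hash indicator), not a verbatim copy of format_interchange_psk:

# 1) Build an interchange-format PSK file (sketch; payload layout inferred).
key_path=$(mktemp)
python3 - > "$key_path" <<'EOF'
import base64, zlib
key = b"00112233445566778899aabbccddeeff"
# Assumption: payload = key bytes + little-endian CRC32 of the key.
crc = zlib.crc32(key).to_bytes(4, "little")
print("NVMeTLSkey-1:01:" + base64.b64encode(key + crc).decode() + ":", end="")
EOF
chmod 0600 "$key_path"

# 2) Provision the TLS listener, in the order traced above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc sock_impl_set_options -i ssl --tls-version 13
$rpc framework_start_init
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 "$key_path"
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
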
00:24:37.496 [2024-11-25 12:23:31.287122] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82014 ] 00:24:37.496 [2024-11-25 12:23:31.426124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:37.496 [2024-11-25 12:23:31.462080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:37.496 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:37.496 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:37.496 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5iuyUsj1ea 00:24:37.496 12:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:37.496 [2024-11-25 12:23:31.940543] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:37.496 TLSTESTn1 00:24:37.496 12:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:37.496 Running I/O for 10 seconds... 00:24:38.433 5867.00 IOPS, 22.92 MiB/s [2024-11-25T12:23:35.242Z] 6526.00 IOPS, 25.49 MiB/s [2024-11-25T12:23:36.175Z] 6737.33 IOPS, 26.32 MiB/s [2024-11-25T12:23:37.548Z] 6842.00 IOPS, 26.73 MiB/s [2024-11-25T12:23:38.135Z] 6910.00 IOPS, 26.99 MiB/s [2024-11-25T12:23:39.509Z] 6957.00 IOPS, 27.18 MiB/s [2024-11-25T12:23:40.445Z] 6989.57 IOPS, 27.30 MiB/s [2024-11-25T12:23:41.379Z] 7008.50 IOPS, 27.38 MiB/s [2024-11-25T12:23:42.313Z] 7017.89 IOPS, 27.41 MiB/s [2024-11-25T12:23:42.313Z] 7024.70 IOPS, 27.44 MiB/s 00:24:46.437 Latency(us) 00:24:46.437 [2024-11-25T12:23:42.313Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:46.437 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:46.437 Verification LBA range: start 0x0 length 0x2000 00:24:46.437 TLSTESTn1 : 10.01 7030.79 27.46 0.00 0.00 18177.26 3163.37 18450.90 00:24:46.437 [2024-11-25T12:23:42.313Z] =================================================================================================================== 00:24:46.437 [2024-11-25T12:23:42.313Z] Total : 7030.79 27.46 0.00 0.00 18177.26 3163.37 18450.90 00:24:46.437 { 00:24:46.437 "results": [ 00:24:46.437 { 00:24:46.437 "job": "TLSTESTn1", 00:24:46.437 "core_mask": "0x4", 00:24:46.437 "workload": "verify", 00:24:46.437 "status": "finished", 00:24:46.437 "verify_range": { 00:24:46.437 "start": 0, 00:24:46.437 "length": 8192 00:24:46.437 }, 00:24:46.437 "queue_depth": 128, 00:24:46.437 "io_size": 4096, 00:24:46.437 "runtime": 10.009256, 00:24:46.437 "iops": 7030.792298648371, 00:24:46.437 "mibps": 27.4640324165952, 00:24:46.437 "io_failed": 0, 00:24:46.437 "io_timeout": 0, 00:24:46.437 "avg_latency_us": 18177.263488772463, 00:24:46.437 "min_latency_us": 3163.372307692308, 00:24:46.437 "max_latency_us": 18450.904615384614 00:24:46.437 } 00:24:46.437 ], 00:24:46.437 "core_count": 1 00:24:46.437 } 00:24:46.437 12:23:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:46.437 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 82014 00:24:46.437 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 82014 ']' 00:24:46.437 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 82014 00:24:46.437 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:46.437 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:46.437 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82014 00:24:46.437 killing process with pid 82014 00:24:46.437 Received shutdown signal, test time was about 10.000000 seconds 00:24:46.437 00:24:46.437 Latency(us) 00:24:46.437 [2024-11-25T12:23:42.313Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:46.437 [2024-11-25T12:23:42.313Z] =================================================================================================================== 00:24:46.437 [2024-11-25T12:23:42.313Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:46.437 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:46.437 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:46.437 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82014' 00:24:46.437 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 82014 00:24:46.437 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 82014 00:24:46.437 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.klscqPGRo8 00:24:46.437 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:24:46.437 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.klscqPGRo8 00:24:46.437 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:24:46.437 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:46.437 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:24:46.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
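run_bdevperf, which just finished at roughly 7030 IOPS and is about to be repeated under NOT with the mismatched key, reduces to four traced steps: start bdevperf idle on its own RPC socket, register the PSK with the initiator-side keyring, attach over TLS, and kick off the queued workload. Sketch:

# Start bdevperf idle (-z: wait for RPC) with its own socket, as traced above.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

# Register the PSK on the initiator side, attach over TLS, run the workload.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5iuyUsj1ea
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -t 20 -s /var/tmp/bdevperf.sock perform_tests

The -z flag keeps bdevperf from running anything until perform_tests arrives over the socket, which is why the key and controller can be set up via RPC first.
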
00:24:46.438 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:46.438 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.klscqPGRo8 00:24:46.438 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:46.438 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:46.438 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:46.438 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.klscqPGRo8 00:24:46.438 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:46.438 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:46.438 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=82159 00:24:46.438 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:46.438 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 82159 /var/tmp/bdevperf.sock 00:24:46.438 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 82159 ']' 00:24:46.438 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:46.438 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:46.438 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:46.438 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:46.438 12:23:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:46.438 [2024-11-25 12:23:42.310918] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
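Every case in this block runs through the same run_bdevperf fixture: launch bdevperf idle (-z) on a private RPC socket, wait for it to listen, load the key into its keyring over RPC, attach a TLS controller, and only in the positive cases go on to drive I/O. Condensed to the commands visible in the xtrace, with $psk standing in for the per-case key file:

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$psk"
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests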
00:24:46.438 [2024-11-25 12:23:42.311413] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82159 ] 00:24:46.696 [2024-11-25 12:23:42.445003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:46.696 [2024-11-25 12:23:42.476449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:47.630 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:47.630 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:47.630 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.klscqPGRo8 00:24:47.630 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:47.887 [2024-11-25 12:23:43.574467] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:47.887 [2024-11-25 12:23:43.584737] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:47.887 [2024-11-25 12:23:43.585521] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0db20 (107): Transport endpoint is not connected 00:24:47.887 [2024-11-25 12:23:43.586507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0db20 (9): Bad file descriptor 00:24:47.887 [2024-11-25 12:23:43.587505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:24:47.887 [2024-11-25 12:23:43.587617] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:24:47.887 [2024-11-25 12:23:43.587704] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:24:47.887 [2024-11-25 12:23:43.587808] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
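The cascade above is what a rejected TLS handshake looks like from the initiator side: the target drops the connection (it holds no matching PSK for this key), spdk_sock_recv() fails with errno 107 (Transport endpoint is not connected), the retry finds the fd already closed (errno 9), the controller lands in the failed state, and the RPC that follows surfaces it all as Code=-5, Input/output error, exactly the failure the NOT wrapper requires.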
00:24:47.887 2024/11/25 12:23:43 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:24:47.887 request: 00:24:47.887 { 00:24:47.887 "method": "bdev_nvme_attach_controller", 00:24:47.887 "params": { 00:24:47.887 "name": "TLSTEST", 00:24:47.887 "trtype": "tcp", 00:24:47.887 "traddr": "10.0.0.3", 00:24:47.887 "adrfam": "ipv4", 00:24:47.887 "trsvcid": "4420", 00:24:47.887 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:47.887 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:47.887 "prchk_reftag": false, 00:24:47.887 "prchk_guard": false, 00:24:47.887 "hdgst": false, 00:24:47.887 "ddgst": false, 00:24:47.887 "psk": "key0", 00:24:47.887 "allow_unrecognized_csi": false 00:24:47.887 } 00:24:47.887 } 00:24:47.887 Got JSON-RPC error response 00:24:47.887 GoRPCClient: error on JSON-RPC call 00:24:47.887 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 82159 00:24:47.887 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 82159 ']' 00:24:47.887 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 82159 00:24:47.887 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:47.887 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:47.887 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82159 00:24:47.887 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:47.887 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:47.887 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82159' 00:24:47.887 killing process with pid 82159 00:24:47.887 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 82159 00:24:47.887 Received shutdown signal, test time was about 10.000000 seconds 00:24:47.887 00:24:47.887 Latency(us) 00:24:47.887 [2024-11-25T12:23:43.763Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:47.887 [2024-11-25T12:23:43.763Z] =================================================================================================================== 00:24:47.887 [2024-11-25T12:23:43.763Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:47.887 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 82159 00:24:47.887 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:47.887 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:24:47.887 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:47.887 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:47.887 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:47.887 12:23:43 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.5iuyUsj1ea 00:24:47.887 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:24:47.887 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.5iuyUsj1ea 00:24:47.887 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:24:47.887 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:47.887 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:24:47.887 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:47.887 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.5iuyUsj1ea 00:24:47.887 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:47.887 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:47.887 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:24:47.887 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.5iuyUsj1ea 00:24:47.887 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:47.887 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=82211 00:24:47.887 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:47.887 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:47.887 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 82211 /var/tmp/bdevperf.sock 00:24:47.887 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 82211 ']' 00:24:47.887 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:47.887 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:47.887 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:47.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:47.887 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:47.887 12:23:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:48.150 [2024-11-25 12:23:43.764058] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:24:48.150 [2024-11-25 12:23:43.764115] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82211 ] 00:24:48.150 [2024-11-25 12:23:43.899932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:48.150 [2024-11-25 12:23:43.930904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:49.080 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:49.080 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:49.080 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5iuyUsj1ea 00:24:49.080 12:23:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:24:49.338 [2024-11-25 12:23:45.060369] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:49.338 [2024-11-25 12:23:45.064821] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:24:49.338 [2024-11-25 12:23:45.064844] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:24:49.338 [2024-11-25 12:23:45.064874] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:49.338 [2024-11-25 12:23:45.065439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd5bb20 (107): Transport endpoint is not connected 00:24:49.338 [2024-11-25 12:23:45.066425] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd5bb20 (9): Bad file descriptor 00:24:49.338 [2024-11-25 12:23:45.067423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:24:49.338 [2024-11-25 12:23:45.067527] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:24:49.338 [2024-11-25 12:23:45.067610] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:24:49.338 [2024-11-25 12:23:45.067716] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
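The tcp.c/posix.c errors above add the target-side view and show the TLS PSK identity the server tried to resolve. The identity concatenates a fixed prefix, a hash selector, and the two NQNs, roughly as follows (variable names illustrative; the 01 appears to select the SHA-256 variant):

hostnqn=nqn.2016-06.io.spdk:host2
subnqn=nqn.2016-06.io.spdk:cnode1
printf 'NVMe0R01 %s %s\n' "$hostnqn" "$subnqn"
# -> NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1

Since key0 was registered only for host1, no PSK exists for an identity naming host2, hence "Could not find PSK for identity". tls.sh@153 below repeats the exercise with the subsystem NQN swapped (cnode2) instead of the host NQN.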
00:24:49.338 2024/11/25 12:23:45 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:24:49.338 request: 00:24:49.338 { 00:24:49.338 "method": "bdev_nvme_attach_controller", 00:24:49.338 "params": { 00:24:49.338 "name": "TLSTEST", 00:24:49.338 "trtype": "tcp", 00:24:49.338 "traddr": "10.0.0.3", 00:24:49.338 "adrfam": "ipv4", 00:24:49.338 "trsvcid": "4420", 00:24:49.338 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:49.338 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:49.338 "prchk_reftag": false, 00:24:49.338 "prchk_guard": false, 00:24:49.338 "hdgst": false, 00:24:49.338 "ddgst": false, 00:24:49.338 "psk": "key0", 00:24:49.338 "allow_unrecognized_csi": false 00:24:49.338 } 00:24:49.338 } 00:24:49.338 Got JSON-RPC error response 00:24:49.338 GoRPCClient: error on JSON-RPC call 00:24:49.338 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 82211 00:24:49.338 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 82211 ']' 00:24:49.338 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 82211 00:24:49.338 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:49.338 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:49.338 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82211 00:24:49.338 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:49.338 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:49.338 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82211' 00:24:49.338 killing process with pid 82211 00:24:49.338 Received shutdown signal, test time was about 10.000000 seconds 00:24:49.338 00:24:49.338 Latency(us) 00:24:49.338 [2024-11-25T12:23:45.214Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:49.338 [2024-11-25T12:23:45.214Z] =================================================================================================================== 00:24:49.338 [2024-11-25T12:23:45.214Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:49.338 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 82211 00:24:49.338 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 82211 00:24:49.338 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:49.338 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:24:49.338 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:49.338 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:49.338 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:49.338 12:23:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.5iuyUsj1ea 00:24:49.338 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:24:49.338 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.5iuyUsj1ea 00:24:49.338 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:24:49.338 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:49.338 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:24:49.338 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:49.338 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.5iuyUsj1ea 00:24:49.338 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:49.338 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:24:49.338 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:49.338 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.5iuyUsj1ea 00:24:49.338 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:49.338 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=82264 00:24:49.338 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:49.338 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 82264 /var/tmp/bdevperf.sock 00:24:49.338 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 82264 ']' 00:24:49.338 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:49.338 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:49.338 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:49.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:49.338 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:49.338 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:49.338 12:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:49.595 [2024-11-25 12:23:45.239163] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:24:49.596 [2024-11-25 12:23:45.239225] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82264 ] 00:24:49.596 [2024-11-25 12:23:45.375116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:49.596 [2024-11-25 12:23:45.406370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:50.527 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:50.527 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:50.527 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5iuyUsj1ea 00:24:50.527 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:50.784 [2024-11-25 12:23:46.578436] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:50.784 [2024-11-25 12:23:46.585009] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:50.784 [2024-11-25 12:23:46.585034] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:50.784 [2024-11-25 12:23:46.585064] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:50.784 [2024-11-25 12:23:46.585472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f1fb20 (107): Transport endpoint is not connected 00:24:50.784 [2024-11-25 12:23:46.586458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f1fb20 (9): Bad file descriptor 00:24:50.784 [2024-11-25 12:23:46.587456] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:24:50.784 [2024-11-25 12:23:46.587569] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:24:50.784 [2024-11-25 12:23:46.587683] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:24:50.785 [2024-11-25 12:23:46.587742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:24:50.785 2024/11/25 12:23:46 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:24:50.785 request: 00:24:50.785 { 00:24:50.785 "method": "bdev_nvme_attach_controller", 00:24:50.785 "params": { 00:24:50.785 "name": "TLSTEST", 00:24:50.785 "trtype": "tcp", 00:24:50.785 "traddr": "10.0.0.3", 00:24:50.785 "adrfam": "ipv4", 00:24:50.785 "trsvcid": "4420", 00:24:50.785 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:50.785 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:50.785 "prchk_reftag": false, 00:24:50.785 "prchk_guard": false, 00:24:50.785 "hdgst": false, 00:24:50.785 "ddgst": false, 00:24:50.785 "psk": "key0", 00:24:50.785 "allow_unrecognized_csi": false 00:24:50.785 } 00:24:50.785 } 00:24:50.785 Got JSON-RPC error response 00:24:50.785 GoRPCClient: error on JSON-RPC call 00:24:50.785 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 82264 00:24:50.785 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 82264 ']' 00:24:50.785 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 82264 00:24:50.785 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:50.785 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:50.785 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82264 00:24:50.785 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:50.785 killing process with pid 82264 00:24:50.785 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:50.785 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82264' 00:24:50.785 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 82264 00:24:50.785 Received shutdown signal, test time was about 10.000000 seconds 00:24:50.785 00:24:50.785 Latency(us) 00:24:50.785 [2024-11-25T12:23:46.661Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:50.785 [2024-11-25T12:23:46.661Z] =================================================================================================================== 00:24:50.785 [2024-11-25T12:23:46.661Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:50.785 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 82264 00:24:51.043 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:51.043 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:24:51.043 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:51.043 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:51.043 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:51.043 12:23:46 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:51.043 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:24:51.043 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:51.043 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:24:51.043 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:51.043 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:24:51.043 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:51.043 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:51.043 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:51.043 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:51.043 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:51.043 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:24:51.043 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:51.043 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=82311 00:24:51.043 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:51.043 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 82311 /var/tmp/bdevperf.sock 00:24:51.043 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 82311 ']' 00:24:51.043 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:51.043 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:51.043 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:51.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:51.043 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:51.043 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:51.043 12:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:51.043 [2024-11-25 12:23:46.774243] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:24:51.043 [2024-11-25 12:23:46.774300] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82311 ] 00:24:51.043 [2024-11-25 12:23:46.910716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:51.301 [2024-11-25 12:23:46.941657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:51.868 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:51.868 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:51.868 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:24:52.126 [2024-11-25 12:23:47.797548] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:24:52.126 [2024-11-25 12:23:47.797981] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:52.126 2024/11/25 12:23:47 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:24:52.127 request: 00:24:52.127 { 00:24:52.127 "method": "keyring_file_add_key", 00:24:52.127 "params": { 00:24:52.127 "name": "key0", 00:24:52.127 "path": "" 00:24:52.127 } 00:24:52.127 } 00:24:52.127 Got JSON-RPC error response 00:24:52.127 GoRPCClient: error on JSON-RPC call 00:24:52.127 12:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:52.385 [2024-11-25 12:23:48.005669] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:52.385 [2024-11-25 12:23:48.005919] bdev_nvme.c:6717:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:24:52.385 2024/11/25 12:23:48 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-126 Msg=Required key not available 00:24:52.385 request: 00:24:52.385 { 00:24:52.385 "method": "bdev_nvme_attach_controller", 00:24:52.385 "params": { 00:24:52.385 "name": "TLSTEST", 00:24:52.385 "trtype": "tcp", 00:24:52.385 "traddr": "10.0.0.3", 00:24:52.385 "adrfam": "ipv4", 00:24:52.385 "trsvcid": "4420", 00:24:52.385 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:52.385 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:52.385 "prchk_reftag": false, 00:24:52.385 "prchk_guard": false, 00:24:52.385 "hdgst": false, 00:24:52.385 "ddgst": false, 00:24:52.385 "psk": "key0", 00:24:52.385 "allow_unrecognized_csi": false 00:24:52.385 } 00:24:52.385 } 00:24:52.385 Got JSON-RPC error response 00:24:52.385 GoRPCClient: error on JSON-RPC call 00:24:52.385 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 82311 00:24:52.385 12:23:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 82311 ']' 00:24:52.385 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 82311 00:24:52.385 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:52.385 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:52.385 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82311 00:24:52.385 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:52.385 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:52.385 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82311' 00:24:52.385 killing process with pid 82311 00:24:52.385 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 82311 00:24:52.385 Received shutdown signal, test time was about 10.000000 seconds 00:24:52.385 00:24:52.385 Latency(us) 00:24:52.385 [2024-11-25T12:23:48.261Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:52.385 [2024-11-25T12:23:48.261Z] =================================================================================================================== 00:24:52.385 [2024-11-25T12:23:48.261Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:52.385 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 82311 00:24:52.385 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:52.385 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:24:52.385 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:52.385 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:52.385 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:52.385 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 81655 00:24:52.385 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 81655 ']' 00:24:52.385 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 81655 00:24:52.385 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:52.385 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:52.385 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81655 00:24:52.385 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:52.385 killing process with pid 81655 00:24:52.385 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:52.385 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81655' 00:24:52.385 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 81655 00:24:52.385 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 81655 00:24:52.644 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:24:52.644 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:24:52.644 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:24:52.644 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:24:52.644 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:24:52.644 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:24:52.644 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:24:52.644 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:52.644 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:24:52.644 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.zndCmKUwR3 00:24:52.644 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:52.644 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.zndCmKUwR3 00:24:52.644 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:24:52.644 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:52.644 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:52.644 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:52.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:52.644 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=82374 00:24:52.644 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:52.644 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 82374 00:24:52.644 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 82374 ']' 00:24:52.644 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:52.644 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:52.644 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:52.644 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:52.644 12:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:52.644 [2024-11-25 12:23:48.361564] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
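The python - step inside format_key runs a heredoc whose body xtrace does not show; what it produces is the PSK interchange encoding: the configured key plus a little-endian CRC32 of it, base64-encoded and wrapped in the NVMeTLSkey-1:<hash>: envelope (digest 2 selects the SHA-384 suffix). A reconstruction of that helper, offered as a sketch rather than the script verbatim:

format_interchange_psk() {
    local key=$1 digest=$2
    python - "$key" "$digest" << 'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()
digest = int(sys.argv[2])
# interchange form: key bytes + little-endian CRC32 of them, base64-encoded
crc = zlib.crc32(key).to_bytes(4, "little")
print("NVMeTLSkey-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
EOF
}

# format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2
# reproduces the key_long value captured above; the chmod 0600 that follows is
# not cosmetic, since the keyring refuses group/world-accessible key files
# (tls.sh@171 near the end of this run demonstrates that deliberately).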
00:24:52.644 [2024-11-25 12:23:48.361802] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:52.644 [2024-11-25 12:23:48.503605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:52.902 [2024-11-25 12:23:48.533549] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:52.902 [2024-11-25 12:23:48.533700] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:52.902 [2024-11-25 12:23:48.533757] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:52.902 [2024-11-25 12:23:48.534045] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:52.902 [2024-11-25 12:23:48.534055] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:52.902 [2024-11-25 12:23:48.534280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:53.467 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:53.467 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:53.467 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:53.467 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:53.467 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:53.467 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:53.467 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.zndCmKUwR3 00:24:53.467 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.zndCmKUwR3 00:24:53.467 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:53.724 [2024-11-25 12:23:49.411442] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:53.724 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:53.983 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:24:53.983 [2024-11-25 12:23:49.827494] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:53.983 [2024-11-25 12:23:49.827634] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:53.983 12:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:54.241 malloc0 00:24:54.241 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:54.498 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
keyring_file_add_key key0 /tmp/tmp.zndCmKUwR3 00:24:54.756 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:55.015 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zndCmKUwR3 00:24:55.015 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:55.015 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:55.015 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:55.015 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.zndCmKUwR3 00:24:55.015 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:55.015 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=82483 00:24:55.015 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:55.015 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:55.015 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 82483 /var/tmp/bdevperf.sock 00:24:55.015 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 82483 ']' 00:24:55.015 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:55.015 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:55.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:55.015 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:55.015 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:55.015 12:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:55.015 [2024-11-25 12:23:50.668805] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
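Strung together, that is the whole target-side recipe for a TLS-capable subsystem, spread here across several xtrace lines; condensed, with rpc pointing at /home/vagrant/spdk_repo/spdk/scripts/rpc.py against the default /var/tmp/spdk.sock:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k enables TLS on the listener
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 /tmp/tmp.zndCmKUwR3
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

With the same key file handed to run_bdevperf on the initiator side, the tls.sh@168 run that follows serves as the positive control for the permission test after it.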
00:24:55.015 [2024-11-25 12:23:50.668870] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82483 ] 00:24:55.015 [2024-11-25 12:23:50.810723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:55.015 [2024-11-25 12:23:50.846399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:55.949 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:55.949 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:55.949 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zndCmKUwR3 00:24:55.949 12:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:56.207 [2024-11-25 12:23:51.927255] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:56.207 TLSTESTn1 00:24:56.207 12:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:56.464 Running I/O for 10 seconds... 00:24:58.375 6442.00 IOPS, 25.16 MiB/s [2024-11-25T12:23:55.191Z] 6784.00 IOPS, 26.50 MiB/s [2024-11-25T12:23:56.125Z] 6951.67 IOPS, 27.15 MiB/s [2024-11-25T12:23:57.518Z] 7035.50 IOPS, 27.48 MiB/s [2024-11-25T12:23:58.451Z] 7079.40 IOPS, 27.65 MiB/s [2024-11-25T12:23:59.385Z] 7084.33 IOPS, 27.67 MiB/s [2024-11-25T12:24:00.319Z] 7097.29 IOPS, 27.72 MiB/s [2024-11-25T12:24:01.253Z] 7113.88 IOPS, 27.79 MiB/s [2024-11-25T12:24:02.187Z] 7117.89 IOPS, 27.80 MiB/s [2024-11-25T12:24:02.187Z] 7133.60 IOPS, 27.87 MiB/s 00:25:06.311 Latency(us) 00:25:06.311 [2024-11-25T12:24:02.187Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:06.311 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:06.311 Verification LBA range: start 0x0 length 0x2000 00:25:06.311 TLSTESTn1 : 10.01 7139.84 27.89 0.00 0.00 17899.18 3302.01 16131.94 00:25:06.311 [2024-11-25T12:24:02.187Z] =================================================================================================================== 00:25:06.311 [2024-11-25T12:24:02.187Z] Total : 7139.84 27.89 0.00 0.00 17899.18 3302.01 16131.94 00:25:06.311 { 00:25:06.311 "results": [ 00:25:06.311 { 00:25:06.311 "job": "TLSTESTn1", 00:25:06.311 "core_mask": "0x4", 00:25:06.311 "workload": "verify", 00:25:06.311 "status": "finished", 00:25:06.311 "verify_range": { 00:25:06.311 "start": 0, 00:25:06.311 "length": 8192 00:25:06.311 }, 00:25:06.311 "queue_depth": 128, 00:25:06.311 "io_size": 4096, 00:25:06.311 "runtime": 10.009192, 00:25:06.311 "iops": 7139.837061772819, 00:25:06.311 "mibps": 27.889988522550073, 00:25:06.311 "io_failed": 0, 00:25:06.311 "io_timeout": 0, 00:25:06.311 "avg_latency_us": 17899.17998639444, 00:25:06.311 "min_latency_us": 3302.0061538461537, 00:25:06.311 "max_latency_us": 16131.938461538462 00:25:06.311 } 00:25:06.311 ], 00:25:06.311 "core_count": 1 00:25:06.311 } 00:25:06.311 12:24:02 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:06.311 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 82483 00:25:06.311 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 82483 ']' 00:25:06.311 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 82483 00:25:06.311 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:06.311 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:06.311 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82483 00:25:06.311 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:06.311 killing process with pid 82483 00:25:06.311 Received shutdown signal, test time was about 10.000000 seconds 00:25:06.311 00:25:06.311 Latency(us) 00:25:06.311 [2024-11-25T12:24:02.187Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:06.311 [2024-11-25T12:24:02.187Z] =================================================================================================================== 00:25:06.311 [2024-11-25T12:24:02.187Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:06.311 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:06.311 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82483' 00:25:06.311 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 82483 00:25:06.311 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 82483 00:25:06.569 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.zndCmKUwR3 00:25:06.569 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zndCmKUwR3 00:25:06.569 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:25:06.569 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zndCmKUwR3 00:25:06.569 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:25:06.569 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:06.569 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:25:06.569 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:06.569 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.zndCmKUwR3 00:25:06.569 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:06.569 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:06.569 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:06.569 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@23 -- # psk=/tmp/tmp.zndCmKUwR3 00:25:06.569 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:06.569 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=82643 00:25:06.569 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:06.569 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 82643 /var/tmp/bdevperf.sock 00:25:06.569 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 82643 ']' 00:25:06.569 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:06.569 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:06.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:06.569 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:06.569 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:06.569 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:06.569 12:24:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:06.569 [2024-11-25 12:24:02.299406] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:25:06.569 [2024-11-25 12:24:02.299471] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82643 ] 00:25:06.569 [2024-11-25 12:24:02.435585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.828 [2024-11-25 12:24:02.466990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:07.419 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:07.419 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:07.419 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zndCmKUwR3 00:25:07.723 [2024-11-25 12:24:03.360769] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.zndCmKUwR3': 0100666 00:25:07.723 [2024-11-25 12:24:03.361054] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:25:07.723 2024/11/25 12:24:03 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.zndCmKUwR3], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:25:07.723 request: 00:25:07.723 { 00:25:07.723 "method": "keyring_file_add_key", 00:25:07.723 "params": { 00:25:07.723 "name": "key0", 00:25:07.723 "path": "/tmp/tmp.zndCmKUwR3" 00:25:07.723 } 00:25:07.723 } 00:25:07.723 Got JSON-RPC error response 00:25:07.723 GoRPCClient: error on JSON-RPC call 00:25:07.723 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:07.724 [2024-11-25 12:24:03.564877] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:07.724 [2024-11-25 12:24:03.564973] bdev_nvme.c:6717:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:25:07.724 2024/11/25 12:24:03 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-126 Msg=Required key not available 00:25:07.724 request: 00:25:07.724 { 00:25:07.724 "method": "bdev_nvme_attach_controller", 00:25:07.724 "params": { 00:25:07.724 "name": "TLSTEST", 00:25:07.724 "trtype": "tcp", 00:25:07.724 "traddr": "10.0.0.3", 00:25:07.724 "adrfam": "ipv4", 00:25:07.724 "trsvcid": "4420", 00:25:07.724 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:07.724 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:07.724 "prchk_reftag": false, 00:25:07.724 "prchk_guard": false, 00:25:07.724 "hdgst": false, 00:25:07.724 "ddgst": false, 00:25:07.724 "psk": "key0", 00:25:07.724 "allow_unrecognized_csi": false 00:25:07.724 } 00:25:07.724 } 00:25:07.724 Got JSON-RPC error response 00:25:07.724 GoRPCClient: error on JSON-RPC call 00:25:07.724 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 82643 00:25:07.724 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 82643 ']' 00:25:07.724 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 82643 00:25:07.724 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:07.724 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:07.724 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82643 00:25:07.983 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:07.983 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:07.983 killing process with pid 82643 00:25:07.983 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82643' 00:25:07.983 Received shutdown signal, test time was about 10.000000 seconds 00:25:07.983 00:25:07.983 Latency(us) 00:25:07.983 [2024-11-25T12:24:03.859Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:07.983 [2024-11-25T12:24:03.859Z] =================================================================================================================== 00:25:07.983 [2024-11-25T12:24:03.859Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:07.983 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 82643 00:25:07.983 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 82643 00:25:07.983 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 
00:25:07.983 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:25:07.983 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:07.983 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:07.983 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:07.983 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 82374 00:25:07.983 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 82374 ']' 00:25:07.983 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 82374 00:25:07.983 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:07.983 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:07.983 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82374 00:25:07.983 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:07.983 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:07.983 killing process with pid 82374 00:25:07.983 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82374' 00:25:07.983 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 82374 00:25:07.983 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 82374 00:25:07.983 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:25:07.983 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:07.983 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:07.983 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:07.983 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=82700 00:25:07.983 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 82700 00:25:07.983 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 82700 ']' 00:25:07.983 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:07.983 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:07.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:07.983 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:07.983 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:07.983 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:07.983 12:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:08.242 [2024-11-25 12:24:03.876562] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:25:08.242 [2024-11-25 12:24:03.876658] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:08.242 [2024-11-25 12:24:04.019614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.242 [2024-11-25 12:24:04.050277] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:08.242 [2024-11-25 12:24:04.050315] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:08.242 [2024-11-25 12:24:04.050320] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:08.242 [2024-11-25 12:24:04.050324] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:08.242 [2024-11-25 12:24:04.050327] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:08.242 [2024-11-25 12:24:04.050544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:09.176 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:09.176 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:09.176 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:09.176 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:09.176 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:09.176 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:09.176 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.zndCmKUwR3 00:25:09.176 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:25:09.176 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.zndCmKUwR3 00:25:09.176 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:25:09.176 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:09.176 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:25:09.176 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:09.176 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.zndCmKUwR3 00:25:09.176 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.zndCmKUwR3 00:25:09.176 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:09.176 [2024-11-25 12:24:04.916242] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:09.176 12:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:09.433 12:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:25:09.691 [2024-11-25 12:24:05.320289] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:09.691 [2024-11-25 12:24:05.320430] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:09.691 12:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:09.691 malloc0 00:25:09.691 12:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:09.949 12:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.zndCmKUwR3 00:25:10.208 [2024-11-25 12:24:05.890372] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.zndCmKUwR3': 0100666 00:25:10.208 [2024-11-25 12:24:05.890399] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:25:10.208 2024/11/25 12:24:05 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.zndCmKUwR3], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:25:10.208 request: 00:25:10.208 { 00:25:10.208 "method": "keyring_file_add_key", 00:25:10.208 "params": { 00:25:10.208 "name": "key0", 00:25:10.208 "path": "/tmp/tmp.zndCmKUwR3" 00:25:10.208 } 00:25:10.208 } 00:25:10.208 Got JSON-RPC error response 00:25:10.208 GoRPCClient: error on JSON-RPC call 00:25:10.208 12:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:25:10.467 [2024-11-25 12:24:06.094454] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:25:10.467 [2024-11-25 12:24:06.094621] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:25:10.467 2024/11/25 12:24:06 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:key0], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:25:10.467 request: 00:25:10.467 { 00:25:10.467 "method": "nvmf_subsystem_add_host", 00:25:10.467 "params": { 00:25:10.467 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:10.467 "host": "nqn.2016-06.io.spdk:host1", 00:25:10.467 "psk": "key0" 00:25:10.467 } 00:25:10.467 } 00:25:10.467 Got JSON-RPC error response 00:25:10.467 GoRPCClient: error on JSON-RPC call 00:25:10.467 12:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:25:10.467 12:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:10.467 12:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:10.467 12:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:10.467 12:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 82700 00:25:10.467 12:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 82700 ']' 00:25:10.467 12:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 82700 00:25:10.467 12:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:10.467 12:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:10.467 12:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82700 00:25:10.467 12:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:10.467 killing process with pid 82700 00:25:10.467 12:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:10.467 12:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82700' 00:25:10.467 12:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 82700 00:25:10.467 12:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 82700 00:25:10.467 12:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.zndCmKUwR3 00:25:10.467 12:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:25:10.467 12:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:10.467 12:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:10.467 12:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:10.467 12:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=82817 00:25:10.467 12:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 82817 00:25:10.467 12:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 82817 ']' 00:25:10.467 12:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:10.467 12:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:10.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:10.467 12:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:10.467 12:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:10.467 12:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:10.467 12:24:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:10.467 [2024-11-25 12:24:06.284726] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:25:10.467 [2024-11-25 12:24:06.284804] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:10.726 [2024-11-25 12:24:06.426522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:10.726 [2024-11-25 12:24:06.460448] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:10.726 [2024-11-25 12:24:06.460490] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:10.726 [2024-11-25 12:24:06.460496] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:10.726 [2024-11-25 12:24:06.460501] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:10.726 [2024-11-25 12:24:06.460506] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:10.726 [2024-11-25 12:24:06.460766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:11.293 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:11.294 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:11.294 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:11.294 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:11.294 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:11.294 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:11.294 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.zndCmKUwR3 00:25:11.294 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.zndCmKUwR3 00:25:11.294 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:11.552 [2024-11-25 12:24:07.336221] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:11.552 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:11.810 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:25:12.068 [2024-11-25 12:24:07.732252] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:12.069 [2024-11-25 12:24:07.732390] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:12.069 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:12.069 malloc0 00:25:12.327 12:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:12.327 12:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.zndCmKUwR3 00:25:12.585 12:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:25:12.844 12:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:12.844 12:24:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=82922 00:25:12.844 12:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:12.844 12:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 82922 /var/tmp/bdevperf.sock 00:25:12.844 12:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 82922 ']' 00:25:12.844 12:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:12.844 12:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:12.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:12.844 12:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:12.844 12:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:12.844 12:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:12.844 [2024-11-25 12:24:08.563814] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:25:12.844 [2024-11-25 12:24:08.563866] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82922 ] 00:25:12.844 [2024-11-25 12:24:08.695971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:13.137 [2024-11-25 12:24:08.732144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:13.137 12:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:13.137 12:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:13.137 12:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zndCmKUwR3 00:25:13.396 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:13.396 [2024-11-25 12:24:09.160899] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:13.396 TLSTESTn1 00:25:13.396 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:25:13.654 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:25:13.654 "subsystems": [ 00:25:13.654 { 00:25:13.654 "subsystem": "keyring", 00:25:13.654 "config": [ 00:25:13.654 { 00:25:13.654 "method": "keyring_file_add_key", 00:25:13.654 "params": { 00:25:13.654 "name": "key0", 00:25:13.654 "path": "/tmp/tmp.zndCmKUwR3" 00:25:13.654 } 00:25:13.654 } 00:25:13.654 ] 00:25:13.654 }, 00:25:13.654 { 00:25:13.655 "subsystem": "iobuf", 00:25:13.655 "config": [ 00:25:13.655 { 00:25:13.655 "method": "iobuf_set_options", 00:25:13.655 "params": { 00:25:13.655 "enable_numa": false, 00:25:13.655 "large_bufsize": 135168, 00:25:13.655 
"large_pool_count": 1024, 00:25:13.655 "small_bufsize": 8192, 00:25:13.655 "small_pool_count": 8192 00:25:13.655 } 00:25:13.655 } 00:25:13.655 ] 00:25:13.655 }, 00:25:13.655 { 00:25:13.655 "subsystem": "sock", 00:25:13.655 "config": [ 00:25:13.655 { 00:25:13.655 "method": "sock_set_default_impl", 00:25:13.655 "params": { 00:25:13.655 "impl_name": "posix" 00:25:13.655 } 00:25:13.655 }, 00:25:13.655 { 00:25:13.655 "method": "sock_impl_set_options", 00:25:13.655 "params": { 00:25:13.655 "enable_ktls": false, 00:25:13.655 "enable_placement_id": 0, 00:25:13.655 "enable_quickack": false, 00:25:13.655 "enable_recv_pipe": true, 00:25:13.655 "enable_zerocopy_send_client": false, 00:25:13.655 "enable_zerocopy_send_server": true, 00:25:13.655 "impl_name": "ssl", 00:25:13.655 "recv_buf_size": 4096, 00:25:13.655 "send_buf_size": 4096, 00:25:13.655 "tls_version": 0, 00:25:13.655 "zerocopy_threshold": 0 00:25:13.655 } 00:25:13.655 }, 00:25:13.655 { 00:25:13.655 "method": "sock_impl_set_options", 00:25:13.655 "params": { 00:25:13.655 "enable_ktls": false, 00:25:13.655 "enable_placement_id": 0, 00:25:13.655 "enable_quickack": false, 00:25:13.655 "enable_recv_pipe": true, 00:25:13.655 "enable_zerocopy_send_client": false, 00:25:13.655 "enable_zerocopy_send_server": true, 00:25:13.655 "impl_name": "posix", 00:25:13.655 "recv_buf_size": 2097152, 00:25:13.655 "send_buf_size": 2097152, 00:25:13.655 "tls_version": 0, 00:25:13.655 "zerocopy_threshold": 0 00:25:13.655 } 00:25:13.655 } 00:25:13.655 ] 00:25:13.655 }, 00:25:13.655 { 00:25:13.655 "subsystem": "vmd", 00:25:13.655 "config": [] 00:25:13.655 }, 00:25:13.655 { 00:25:13.655 "subsystem": "accel", 00:25:13.655 "config": [ 00:25:13.655 { 00:25:13.655 "method": "accel_set_options", 00:25:13.655 "params": { 00:25:13.655 "buf_count": 2048, 00:25:13.655 "large_cache_size": 16, 00:25:13.655 "sequence_count": 2048, 00:25:13.655 "small_cache_size": 128, 00:25:13.655 "task_count": 2048 00:25:13.655 } 00:25:13.655 } 00:25:13.655 ] 00:25:13.655 }, 00:25:13.655 { 00:25:13.655 "subsystem": "bdev", 00:25:13.655 "config": [ 00:25:13.655 { 00:25:13.655 "method": "bdev_set_options", 00:25:13.655 "params": { 00:25:13.655 "bdev_auto_examine": true, 00:25:13.655 "bdev_io_cache_size": 256, 00:25:13.655 "bdev_io_pool_size": 65535, 00:25:13.655 "iobuf_large_cache_size": 16, 00:25:13.655 "iobuf_small_cache_size": 128 00:25:13.655 } 00:25:13.655 }, 00:25:13.655 { 00:25:13.655 "method": "bdev_raid_set_options", 00:25:13.655 "params": { 00:25:13.655 "process_max_bandwidth_mb_sec": 0, 00:25:13.655 "process_window_size_kb": 1024 00:25:13.655 } 00:25:13.655 }, 00:25:13.655 { 00:25:13.655 "method": "bdev_iscsi_set_options", 00:25:13.655 "params": { 00:25:13.655 "timeout_sec": 30 00:25:13.655 } 00:25:13.655 }, 00:25:13.655 { 00:25:13.655 "method": "bdev_nvme_set_options", 00:25:13.655 "params": { 00:25:13.655 "action_on_timeout": "none", 00:25:13.655 "allow_accel_sequence": false, 00:25:13.655 "arbitration_burst": 0, 00:25:13.655 "bdev_retry_count": 3, 00:25:13.655 "ctrlr_loss_timeout_sec": 0, 00:25:13.655 "delay_cmd_submit": true, 00:25:13.655 "dhchap_dhgroups": [ 00:25:13.655 "null", 00:25:13.655 "ffdhe2048", 00:25:13.655 "ffdhe3072", 00:25:13.655 "ffdhe4096", 00:25:13.655 "ffdhe6144", 00:25:13.655 "ffdhe8192" 00:25:13.655 ], 00:25:13.655 "dhchap_digests": [ 00:25:13.655 "sha256", 00:25:13.655 "sha384", 00:25:13.655 "sha512" 00:25:13.655 ], 00:25:13.655 "disable_auto_failback": false, 00:25:13.655 "fast_io_fail_timeout_sec": 0, 00:25:13.655 "generate_uuids": false, 00:25:13.655 
"high_priority_weight": 0, 00:25:13.655 "io_path_stat": false, 00:25:13.655 "io_queue_requests": 0, 00:25:13.655 "keep_alive_timeout_ms": 10000, 00:25:13.655 "low_priority_weight": 0, 00:25:13.655 "medium_priority_weight": 0, 00:25:13.655 "nvme_adminq_poll_period_us": 10000, 00:25:13.655 "nvme_error_stat": false, 00:25:13.655 "nvme_ioq_poll_period_us": 0, 00:25:13.655 "rdma_cm_event_timeout_ms": 0, 00:25:13.655 "rdma_max_cq_size": 0, 00:25:13.655 "rdma_srq_size": 0, 00:25:13.655 "reconnect_delay_sec": 0, 00:25:13.655 "timeout_admin_us": 0, 00:25:13.655 "timeout_us": 0, 00:25:13.655 "transport_ack_timeout": 0, 00:25:13.655 "transport_retry_count": 4, 00:25:13.655 "transport_tos": 0 00:25:13.655 } 00:25:13.655 }, 00:25:13.655 { 00:25:13.655 "method": "bdev_nvme_set_hotplug", 00:25:13.655 "params": { 00:25:13.655 "enable": false, 00:25:13.655 "period_us": 100000 00:25:13.655 } 00:25:13.655 }, 00:25:13.655 { 00:25:13.655 "method": "bdev_malloc_create", 00:25:13.655 "params": { 00:25:13.655 "block_size": 4096, 00:25:13.655 "dif_is_head_of_md": false, 00:25:13.655 "dif_pi_format": 0, 00:25:13.655 "dif_type": 0, 00:25:13.655 "md_size": 0, 00:25:13.655 "name": "malloc0", 00:25:13.655 "num_blocks": 8192, 00:25:13.655 "optimal_io_boundary": 0, 00:25:13.655 "physical_block_size": 4096, 00:25:13.655 "uuid": "f1bd2d89-9f63-4baa-9d25-00ff15c5b147" 00:25:13.655 } 00:25:13.655 }, 00:25:13.655 { 00:25:13.655 "method": "bdev_wait_for_examine" 00:25:13.656 } 00:25:13.656 ] 00:25:13.656 }, 00:25:13.656 { 00:25:13.656 "subsystem": "nbd", 00:25:13.656 "config": [] 00:25:13.656 }, 00:25:13.656 { 00:25:13.656 "subsystem": "scheduler", 00:25:13.656 "config": [ 00:25:13.656 { 00:25:13.656 "method": "framework_set_scheduler", 00:25:13.656 "params": { 00:25:13.656 "name": "static" 00:25:13.656 } 00:25:13.656 } 00:25:13.656 ] 00:25:13.656 }, 00:25:13.656 { 00:25:13.656 "subsystem": "nvmf", 00:25:13.656 "config": [ 00:25:13.656 { 00:25:13.656 "method": "nvmf_set_config", 00:25:13.656 "params": { 00:25:13.656 "admin_cmd_passthru": { 00:25:13.656 "identify_ctrlr": false 00:25:13.656 }, 00:25:13.656 "dhchap_dhgroups": [ 00:25:13.656 "null", 00:25:13.656 "ffdhe2048", 00:25:13.656 "ffdhe3072", 00:25:13.656 "ffdhe4096", 00:25:13.656 "ffdhe6144", 00:25:13.656 "ffdhe8192" 00:25:13.656 ], 00:25:13.656 "dhchap_digests": [ 00:25:13.656 "sha256", 00:25:13.656 "sha384", 00:25:13.656 "sha512" 00:25:13.656 ], 00:25:13.656 "discovery_filter": "match_any" 00:25:13.656 } 00:25:13.656 }, 00:25:13.656 { 00:25:13.656 "method": "nvmf_set_max_subsystems", 00:25:13.656 "params": { 00:25:13.656 "max_subsystems": 1024 00:25:13.656 } 00:25:13.656 }, 00:25:13.656 { 00:25:13.656 "method": "nvmf_set_crdt", 00:25:13.656 "params": { 00:25:13.656 "crdt1": 0, 00:25:13.656 "crdt2": 0, 00:25:13.656 "crdt3": 0 00:25:13.656 } 00:25:13.656 }, 00:25:13.656 { 00:25:13.656 "method": "nvmf_create_transport", 00:25:13.656 "params": { 00:25:13.656 "abort_timeout_sec": 1, 00:25:13.656 "ack_timeout": 0, 00:25:13.656 "buf_cache_size": 4294967295, 00:25:13.656 "c2h_success": false, 00:25:13.656 "data_wr_pool_size": 0, 00:25:13.656 "dif_insert_or_strip": false, 00:25:13.656 "in_capsule_data_size": 4096, 00:25:13.656 "io_unit_size": 131072, 00:25:13.656 "max_aq_depth": 128, 00:25:13.656 "max_io_qpairs_per_ctrlr": 127, 00:25:13.656 "max_io_size": 131072, 00:25:13.656 "max_queue_depth": 128, 00:25:13.656 "num_shared_buffers": 511, 00:25:13.656 "sock_priority": 0, 00:25:13.656 "trtype": "TCP", 00:25:13.656 "zcopy": false 00:25:13.656 } 00:25:13.656 }, 00:25:13.656 { 
00:25:13.656 "method": "nvmf_create_subsystem", 00:25:13.656 "params": { 00:25:13.656 "allow_any_host": false, 00:25:13.656 "ana_reporting": false, 00:25:13.656 "max_cntlid": 65519, 00:25:13.656 "max_namespaces": 10, 00:25:13.656 "min_cntlid": 1, 00:25:13.656 "model_number": "SPDK bdev Controller", 00:25:13.656 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:13.656 "serial_number": "SPDK00000000000001" 00:25:13.656 } 00:25:13.656 }, 00:25:13.656 { 00:25:13.656 "method": "nvmf_subsystem_add_host", 00:25:13.656 "params": { 00:25:13.656 "host": "nqn.2016-06.io.spdk:host1", 00:25:13.656 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:13.656 "psk": "key0" 00:25:13.656 } 00:25:13.656 }, 00:25:13.656 { 00:25:13.656 "method": "nvmf_subsystem_add_ns", 00:25:13.656 "params": { 00:25:13.656 "namespace": { 00:25:13.656 "bdev_name": "malloc0", 00:25:13.656 "nguid": "F1BD2D899F634BAA9D2500FF15C5B147", 00:25:13.656 "no_auto_visible": false, 00:25:13.656 "nsid": 1, 00:25:13.656 "uuid": "f1bd2d89-9f63-4baa-9d25-00ff15c5b147" 00:25:13.656 }, 00:25:13.656 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:25:13.656 } 00:25:13.656 }, 00:25:13.656 { 00:25:13.656 "method": "nvmf_subsystem_add_listener", 00:25:13.656 "params": { 00:25:13.656 "listen_address": { 00:25:13.656 "adrfam": "IPv4", 00:25:13.656 "traddr": "10.0.0.3", 00:25:13.656 "trsvcid": "4420", 00:25:13.656 "trtype": "TCP" 00:25:13.656 }, 00:25:13.656 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:13.656 "secure_channel": true 00:25:13.656 } 00:25:13.656 } 00:25:13.656 ] 00:25:13.656 } 00:25:13.656 ] 00:25:13.656 }' 00:25:13.915 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:25:14.173 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:25:14.173 "subsystems": [ 00:25:14.173 { 00:25:14.173 "subsystem": "keyring", 00:25:14.173 "config": [ 00:25:14.173 { 00:25:14.173 "method": "keyring_file_add_key", 00:25:14.173 "params": { 00:25:14.173 "name": "key0", 00:25:14.173 "path": "/tmp/tmp.zndCmKUwR3" 00:25:14.173 } 00:25:14.173 } 00:25:14.173 ] 00:25:14.173 }, 00:25:14.173 { 00:25:14.173 "subsystem": "iobuf", 00:25:14.173 "config": [ 00:25:14.173 { 00:25:14.173 "method": "iobuf_set_options", 00:25:14.173 "params": { 00:25:14.173 "enable_numa": false, 00:25:14.173 "large_bufsize": 135168, 00:25:14.173 "large_pool_count": 1024, 00:25:14.173 "small_bufsize": 8192, 00:25:14.173 "small_pool_count": 8192 00:25:14.173 } 00:25:14.173 } 00:25:14.173 ] 00:25:14.173 }, 00:25:14.173 { 00:25:14.173 "subsystem": "sock", 00:25:14.173 "config": [ 00:25:14.173 { 00:25:14.173 "method": "sock_set_default_impl", 00:25:14.173 "params": { 00:25:14.173 "impl_name": "posix" 00:25:14.173 } 00:25:14.173 }, 00:25:14.173 { 00:25:14.173 "method": "sock_impl_set_options", 00:25:14.173 "params": { 00:25:14.173 "enable_ktls": false, 00:25:14.173 "enable_placement_id": 0, 00:25:14.173 "enable_quickack": false, 00:25:14.173 "enable_recv_pipe": true, 00:25:14.173 "enable_zerocopy_send_client": false, 00:25:14.173 "enable_zerocopy_send_server": true, 00:25:14.173 "impl_name": "ssl", 00:25:14.173 "recv_buf_size": 4096, 00:25:14.173 "send_buf_size": 4096, 00:25:14.173 "tls_version": 0, 00:25:14.173 "zerocopy_threshold": 0 00:25:14.173 } 00:25:14.173 }, 00:25:14.173 { 00:25:14.173 "method": "sock_impl_set_options", 00:25:14.173 "params": { 00:25:14.173 "enable_ktls": false, 00:25:14.173 "enable_placement_id": 0, 00:25:14.173 "enable_quickack": false, 00:25:14.173 
"enable_recv_pipe": true, 00:25:14.173 "enable_zerocopy_send_client": false, 00:25:14.173 "enable_zerocopy_send_server": true, 00:25:14.173 "impl_name": "posix", 00:25:14.173 "recv_buf_size": 2097152, 00:25:14.173 "send_buf_size": 2097152, 00:25:14.173 "tls_version": 0, 00:25:14.173 "zerocopy_threshold": 0 00:25:14.173 } 00:25:14.173 } 00:25:14.173 ] 00:25:14.173 }, 00:25:14.173 { 00:25:14.173 "subsystem": "vmd", 00:25:14.173 "config": [] 00:25:14.173 }, 00:25:14.173 { 00:25:14.173 "subsystem": "accel", 00:25:14.173 "config": [ 00:25:14.173 { 00:25:14.173 "method": "accel_set_options", 00:25:14.173 "params": { 00:25:14.173 "buf_count": 2048, 00:25:14.173 "large_cache_size": 16, 00:25:14.173 "sequence_count": 2048, 00:25:14.173 "small_cache_size": 128, 00:25:14.173 "task_count": 2048 00:25:14.173 } 00:25:14.173 } 00:25:14.173 ] 00:25:14.173 }, 00:25:14.173 { 00:25:14.173 "subsystem": "bdev", 00:25:14.173 "config": [ 00:25:14.173 { 00:25:14.173 "method": "bdev_set_options", 00:25:14.173 "params": { 00:25:14.173 "bdev_auto_examine": true, 00:25:14.173 "bdev_io_cache_size": 256, 00:25:14.173 "bdev_io_pool_size": 65535, 00:25:14.173 "iobuf_large_cache_size": 16, 00:25:14.173 "iobuf_small_cache_size": 128 00:25:14.173 } 00:25:14.173 }, 00:25:14.173 { 00:25:14.173 "method": "bdev_raid_set_options", 00:25:14.173 "params": { 00:25:14.173 "process_max_bandwidth_mb_sec": 0, 00:25:14.173 "process_window_size_kb": 1024 00:25:14.173 } 00:25:14.173 }, 00:25:14.173 { 00:25:14.173 "method": "bdev_iscsi_set_options", 00:25:14.173 "params": { 00:25:14.173 "timeout_sec": 30 00:25:14.173 } 00:25:14.173 }, 00:25:14.173 { 00:25:14.173 "method": "bdev_nvme_set_options", 00:25:14.173 "params": { 00:25:14.173 "action_on_timeout": "none", 00:25:14.173 "allow_accel_sequence": false, 00:25:14.173 "arbitration_burst": 0, 00:25:14.173 "bdev_retry_count": 3, 00:25:14.173 "ctrlr_loss_timeout_sec": 0, 00:25:14.173 "delay_cmd_submit": true, 00:25:14.173 "dhchap_dhgroups": [ 00:25:14.173 "null", 00:25:14.173 "ffdhe2048", 00:25:14.173 "ffdhe3072", 00:25:14.173 "ffdhe4096", 00:25:14.173 "ffdhe6144", 00:25:14.173 "ffdhe8192" 00:25:14.173 ], 00:25:14.173 "dhchap_digests": [ 00:25:14.173 "sha256", 00:25:14.173 "sha384", 00:25:14.173 "sha512" 00:25:14.173 ], 00:25:14.173 "disable_auto_failback": false, 00:25:14.173 "fast_io_fail_timeout_sec": 0, 00:25:14.173 "generate_uuids": false, 00:25:14.173 "high_priority_weight": 0, 00:25:14.173 "io_path_stat": false, 00:25:14.173 "io_queue_requests": 512, 00:25:14.173 "keep_alive_timeout_ms": 10000, 00:25:14.173 "low_priority_weight": 0, 00:25:14.173 "medium_priority_weight": 0, 00:25:14.173 "nvme_adminq_poll_period_us": 10000, 00:25:14.173 "nvme_error_stat": false, 00:25:14.173 "nvme_ioq_poll_period_us": 0, 00:25:14.173 "rdma_cm_event_timeout_ms": 0, 00:25:14.173 "rdma_max_cq_size": 0, 00:25:14.173 "rdma_srq_size": 0, 00:25:14.173 "reconnect_delay_sec": 0, 00:25:14.173 "timeout_admin_us": 0, 00:25:14.173 "timeout_us": 0, 00:25:14.173 "transport_ack_timeout": 0, 00:25:14.173 "transport_retry_count": 4, 00:25:14.173 "transport_tos": 0 00:25:14.173 } 00:25:14.173 }, 00:25:14.173 { 00:25:14.173 "method": "bdev_nvme_attach_controller", 00:25:14.173 "params": { 00:25:14.173 "adrfam": "IPv4", 00:25:14.173 "ctrlr_loss_timeout_sec": 0, 00:25:14.173 "ddgst": false, 00:25:14.173 "fast_io_fail_timeout_sec": 0, 00:25:14.173 "hdgst": false, 00:25:14.173 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:14.173 "multipath": "multipath", 00:25:14.173 "name": "TLSTEST", 00:25:14.173 "prchk_guard": false, 
00:25:14.173 "prchk_reftag": false, 00:25:14.173 "psk": "key0", 00:25:14.173 "reconnect_delay_sec": 0, 00:25:14.173 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:14.173 "traddr": "10.0.0.3", 00:25:14.173 "trsvcid": "4420", 00:25:14.173 "trtype": "TCP" 00:25:14.173 } 00:25:14.173 }, 00:25:14.174 { 00:25:14.174 "method": "bdev_nvme_set_hotplug", 00:25:14.174 "params": { 00:25:14.174 "enable": false, 00:25:14.174 "period_us": 100000 00:25:14.174 } 00:25:14.174 }, 00:25:14.174 { 00:25:14.174 "method": "bdev_wait_for_examine" 00:25:14.174 } 00:25:14.174 ] 00:25:14.174 }, 00:25:14.174 { 00:25:14.174 "subsystem": "nbd", 00:25:14.174 "config": [] 00:25:14.174 } 00:25:14.174 ] 00:25:14.174 }' 00:25:14.174 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 82922 00:25:14.174 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 82922 ']' 00:25:14.174 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 82922 00:25:14.174 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:14.174 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:14.174 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82922 00:25:14.174 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:14.174 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:14.174 killing process with pid 82922 00:25:14.174 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82922' 00:25:14.174 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 82922 00:25:14.174 Received shutdown signal, test time was about 10.000000 seconds 00:25:14.174 00:25:14.174 Latency(us) 00:25:14.174 [2024-11-25T12:24:10.050Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:14.174 [2024-11-25T12:24:10.050Z] =================================================================================================================== 00:25:14.174 [2024-11-25T12:24:10.050Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:14.174 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 82922 00:25:14.174 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 82817 00:25:14.174 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 82817 ']' 00:25:14.174 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 82817 00:25:14.174 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:14.174 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:14.174 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82817 00:25:14.174 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:14.174 killing process with pid 82817 00:25:14.174 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:14.174 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process 
with pid 82817' 00:25:14.174 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 82817 00:25:14.174 12:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 82817 00:25:14.431 12:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:25:14.431 12:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:14.431 12:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:14.431 12:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:14.431 12:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:25:14.431 "subsystems": [ 00:25:14.431 { 00:25:14.431 "subsystem": "keyring", 00:25:14.431 "config": [ 00:25:14.431 { 00:25:14.431 "method": "keyring_file_add_key", 00:25:14.431 "params": { 00:25:14.431 "name": "key0", 00:25:14.431 "path": "/tmp/tmp.zndCmKUwR3" 00:25:14.431 } 00:25:14.431 } 00:25:14.431 ] 00:25:14.431 }, 00:25:14.431 { 00:25:14.431 "subsystem": "iobuf", 00:25:14.431 "config": [ 00:25:14.431 { 00:25:14.431 "method": "iobuf_set_options", 00:25:14.431 "params": { 00:25:14.431 "enable_numa": false, 00:25:14.431 "large_bufsize": 135168, 00:25:14.431 "large_pool_count": 1024, 00:25:14.431 "small_bufsize": 8192, 00:25:14.431 "small_pool_count": 8192 00:25:14.431 } 00:25:14.431 } 00:25:14.431 ] 00:25:14.431 }, 00:25:14.431 { 00:25:14.431 "subsystem": "sock", 00:25:14.431 "config": [ 00:25:14.431 { 00:25:14.431 "method": "sock_set_default_impl", 00:25:14.431 "params": { 00:25:14.431 "impl_name": "posix" 00:25:14.431 } 00:25:14.431 }, 00:25:14.431 { 00:25:14.431 "method": "sock_impl_set_options", 00:25:14.431 "params": { 00:25:14.431 "enable_ktls": false, 00:25:14.431 "enable_placement_id": 0, 00:25:14.431 "enable_quickack": false, 00:25:14.431 "enable_recv_pipe": true, 00:25:14.431 "enable_zerocopy_send_client": false, 00:25:14.431 "enable_zerocopy_send_server": true, 00:25:14.431 "impl_name": "ssl", 00:25:14.431 "recv_buf_size": 4096, 00:25:14.431 "send_buf_size": 4096, 00:25:14.431 "tls_version": 0, 00:25:14.431 "zerocopy_threshold": 0 00:25:14.431 } 00:25:14.431 }, 00:25:14.431 { 00:25:14.431 "method": "sock_impl_set_options", 00:25:14.431 "params": { 00:25:14.431 "enable_ktls": false, 00:25:14.431 "enable_placement_id": 0, 00:25:14.431 "enable_quickack": false, 00:25:14.431 "enable_recv_pipe": true, 00:25:14.431 "enable_zerocopy_send_client": false, 00:25:14.431 "enable_zerocopy_send_server": true, 00:25:14.431 "impl_name": "posix", 00:25:14.431 "recv_buf_size": 2097152, 00:25:14.431 "send_buf_size": 2097152, 00:25:14.431 "tls_version": 0, 00:25:14.431 "zerocopy_threshold": 0 00:25:14.431 } 00:25:14.431 } 00:25:14.431 ] 00:25:14.431 }, 00:25:14.431 { 00:25:14.431 "subsystem": "vmd", 00:25:14.431 "config": [] 00:25:14.431 }, 00:25:14.431 { 00:25:14.431 "subsystem": "accel", 00:25:14.431 "config": [ 00:25:14.431 { 00:25:14.431 "method": "accel_set_options", 00:25:14.431 "params": { 00:25:14.431 "buf_count": 2048, 00:25:14.431 "large_cache_size": 16, 00:25:14.431 "sequence_count": 2048, 00:25:14.431 "small_cache_size": 128, 00:25:14.431 "task_count": 2048 00:25:14.431 } 00:25:14.431 } 00:25:14.431 ] 00:25:14.431 }, 00:25:14.431 { 00:25:14.431 "subsystem": "bdev", 00:25:14.431 "config": [ 00:25:14.431 { 00:25:14.431 "method": "bdev_set_options", 00:25:14.431 "params": { 00:25:14.431 "bdev_auto_examine": true, 00:25:14.431 
"bdev_io_cache_size": 256, 00:25:14.431 "bdev_io_pool_size": 65535, 00:25:14.431 "iobuf_large_cache_size": 16, 00:25:14.431 "iobuf_small_cache_size": 128 00:25:14.431 } 00:25:14.431 }, 00:25:14.431 { 00:25:14.431 "method": "bdev_raid_set_options", 00:25:14.431 "params": { 00:25:14.431 "process_max_bandwidth_mb_sec": 0, 00:25:14.431 "process_window_size_kb": 1024 00:25:14.431 } 00:25:14.431 }, 00:25:14.431 { 00:25:14.431 "method": "bdev_iscsi_set_options", 00:25:14.432 "params": { 00:25:14.432 "timeout_sec": 30 00:25:14.432 } 00:25:14.432 }, 00:25:14.432 { 00:25:14.432 "method": "bdev_nvme_set_options", 00:25:14.432 "params": { 00:25:14.432 "action_on_timeout": "none", 00:25:14.432 "allow_accel_sequence": false, 00:25:14.432 "arbitration_burst": 0, 00:25:14.432 "bdev_retry_count": 3, 00:25:14.432 "ctrlr_loss_timeout_sec": 0, 00:25:14.432 "delay_cmd_submit": true, 00:25:14.432 "dhchap_dhgroups": [ 00:25:14.432 "null", 00:25:14.432 "ffdhe2048", 00:25:14.432 "ffdhe3072", 00:25:14.432 "ffdhe4096", 00:25:14.432 "ffdhe6144", 00:25:14.432 "ffdhe8192" 00:25:14.432 ], 00:25:14.432 "dhchap_digests": [ 00:25:14.432 "sha256", 00:25:14.432 "sha384", 00:25:14.432 "sha512" 00:25:14.432 ], 00:25:14.432 "disable_auto_failback": false, 00:25:14.432 "fast_io_fail_timeout_sec": 0, 00:25:14.432 "generate_uuids": false, 00:25:14.432 "high_priority_weight": 0, 00:25:14.432 "io_path_stat": false, 00:25:14.432 "io_queue_requests": 0, 00:25:14.432 "keep_alive_timeout_ms": 10000, 00:25:14.432 "low_priority_weight": 0, 00:25:14.432 "medium_priority_weight": 0, 00:25:14.432 "nvme_adminq_poll_period_us": 10000, 00:25:14.432 "nvme_error_stat": false, 00:25:14.432 "nvme_ioq_poll_period_us": 0, 00:25:14.432 "rdma_cm_event_timeout_ms": 0, 00:25:14.432 "rdma_max_cq_size": 0, 00:25:14.432 "rdma_srq_size": 0, 00:25:14.432 "reconnect_delay_sec": 0, 00:25:14.432 "timeout_admin_us": 0, 00:25:14.432 "timeout_us": 0, 00:25:14.432 "transport_ack_timeout": 0, 00:25:14.432 "transport_retry_count": 4, 00:25:14.432 "transport_tos": 0 00:25:14.432 } 00:25:14.432 }, 00:25:14.432 { 00:25:14.432 "method": "bdev_nvme_set_hotplug", 00:25:14.432 "params": { 00:25:14.432 "enable": false, 00:25:14.432 "period_us": 100000 00:25:14.432 } 00:25:14.432 }, 00:25:14.432 { 00:25:14.432 "method": "bdev_malloc_create", 00:25:14.432 "params": { 00:25:14.432 "block_size": 4096, 00:25:14.432 "dif_is_head_of_md": false, 00:25:14.432 "dif_pi_format": 0, 00:25:14.432 "dif_type": 0, 00:25:14.432 "md_size": 0, 00:25:14.432 "name": "malloc0", 00:25:14.432 "num_blocks": 8192, 00:25:14.432 "optimal_io_boundary": 0, 00:25:14.432 "physical_block_size": 4096, 00:25:14.432 "uuid": "f1bd2d89-9f63-4baa-9d25-00ff15c5b147" 00:25:14.432 } 00:25:14.432 }, 00:25:14.432 { 00:25:14.432 "method": "bdev_wait_for_examine" 00:25:14.432 } 00:25:14.432 ] 00:25:14.432 }, 00:25:14.432 { 00:25:14.432 "subsystem": "nbd", 00:25:14.432 "config": [] 00:25:14.432 }, 00:25:14.432 { 00:25:14.432 "subsystem": "scheduler", 00:25:14.432 "config": [ 00:25:14.432 { 00:25:14.432 "method": "framework_set_scheduler", 00:25:14.432 "params": { 00:25:14.432 "name": "static" 00:25:14.432 } 00:25:14.432 } 00:25:14.432 ] 00:25:14.432 }, 00:25:14.432 { 00:25:14.432 "subsystem": "nvmf", 00:25:14.432 "config": [ 00:25:14.432 { 00:25:14.432 "method": "nvmf_set_config", 00:25:14.432 "params": { 00:25:14.432 "admin_cmd_passthru": { 00:25:14.432 "identify_ctrlr": false 00:25:14.432 }, 00:25:14.432 "dhchap_dhgroups": [ 00:25:14.432 "null", 00:25:14.432 "ffdhe2048", 00:25:14.432 "ffdhe3072", 00:25:14.432 
"ffdhe4096", 00:25:14.432 "ffdhe6144", 00:25:14.432 "ffdhe8192" 00:25:14.432 ], 00:25:14.432 "dhchap_digests": [ 00:25:14.432 "sha256", 00:25:14.432 "sha384", 00:25:14.432 "sha512" 00:25:14.432 ], 00:25:14.432 "discovery_filter": "match_any" 00:25:14.432 } 00:25:14.432 }, 00:25:14.432 { 00:25:14.432 "method": "nvmf_set_max_subsystems", 00:25:14.432 "params": { 00:25:14.432 "max_subsystems": 1024 00:25:14.432 } 00:25:14.432 }, 00:25:14.432 { 00:25:14.432 "method": "nvmf_set_crdt", 00:25:14.432 "params": { 00:25:14.432 "crdt1": 0, 00:25:14.432 "crdt2": 0, 00:25:14.432 "crdt3": 0 00:25:14.432 } 00:25:14.432 }, 00:25:14.432 { 00:25:14.432 "method": "nvmf_create_transport", 00:25:14.432 "params": { 00:25:14.432 "abort_timeout_sec": 1, 00:25:14.432 "ack_timeout": 0, 00:25:14.432 "buf_cache_size": 4294967295, 00:25:14.432 "c2h_success": false, 00:25:14.432 "data_wr_pool_size": 0, 00:25:14.432 "dif_insert_or_strip": false, 00:25:14.432 "in_capsule_data_size": 4096, 00:25:14.432 "io_unit_size": 131072, 00:25:14.432 "max_aq_depth": 128, 00:25:14.432 "max_io_qpairs_per_ctrlr": 127, 00:25:14.432 "max_io_size": 131072, 00:25:14.432 "max_queue_depth": 128, 00:25:14.432 "num_shared_buffers": 511, 00:25:14.432 "sock_priority": 0, 00:25:14.432 "trtype": "TCP", 00:25:14.432 "zcopy": false 00:25:14.432 } 00:25:14.432 }, 00:25:14.432 { 00:25:14.432 "method": "nvmf_create_subsystem", 00:25:14.432 "params": { 00:25:14.432 "allow_any_host": false, 00:25:14.432 "ana_reporting": false, 00:25:14.432 "max_cntlid": 65519, 00:25:14.432 "max_namespaces": 10, 00:25:14.432 "min_cntlid": 1, 00:25:14.432 "model_number": "SPDK bdev Controller", 00:25:14.432 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:14.432 "serial_number": "SPDK00000000000001" 00:25:14.432 } 00:25:14.432 }, 00:25:14.432 { 00:25:14.432 "method": "nvmf_subsystem_add_host", 00:25:14.432 "params": { 00:25:14.432 "host": "nqn.2016-06.io.spdk:host1", 00:25:14.432 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:14.432 "psk": "key0" 00:25:14.432 } 00:25:14.432 }, 00:25:14.432 { 00:25:14.432 "method": "nvmf_subsystem_add_ns", 00:25:14.432 "params": { 00:25:14.432 "namespace": { 00:25:14.432 "bdev_name": "malloc0", 00:25:14.432 "nguid": "F1BD2D899F634BAA9D2500FF15C5B147", 00:25:14.432 "no_auto_visible": false, 00:25:14.432 "nsid": 1, 00:25:14.432 "uuid": "f1bd2d89-9f63-4baa-9d25-00ff15c5b147" 00:25:14.432 }, 00:25:14.432 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:25:14.432 } 00:25:14.432 }, 00:25:14.432 { 00:25:14.432 "method": "nvmf_subsystem_add_listener", 00:25:14.432 "params": { 00:25:14.432 "listen_address": { 00:25:14.432 "adrfam": "IPv4", 00:25:14.432 "traddr": "10.0.0.3", 00:25:14.432 "trsvcid": "4420", 00:25:14.432 "trtype": "TCP" 00:25:14.432 }, 00:25:14.432 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:14.432 "secure_channel": true 00:25:14.432 } 00:25:14.432 } 00:25:14.432 ] 00:25:14.432 } 00:25:14.432 ] 00:25:14.432 }' 00:25:14.432 12:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=82983 00:25:14.432 12:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 82983 00:25:14.432 12:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 82983 ']' 00:25:14.432 12:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:25:14.432 12:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:14.432 
12:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:14.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:14.432 12:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:14.432 12:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:14.432 12:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:14.432 [2024-11-25 12:24:10.104838] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:25:14.432 [2024-11-25 12:24:10.104901] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:14.432 [2024-11-25 12:24:10.239713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:14.432 [2024-11-25 12:24:10.269726] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:14.432 [2024-11-25 12:24:10.269771] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:14.432 [2024-11-25 12:24:10.269777] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:14.432 [2024-11-25 12:24:10.269782] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:14.432 [2024-11-25 12:24:10.269786] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:14.432 [2024-11-25 12:24:10.270031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:14.690 [2024-11-25 12:24:10.461320] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:14.690 [2024-11-25 12:24:10.493263] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:14.690 [2024-11-25 12:24:10.493396] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:15.255 12:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:15.255 12:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:15.255 12:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:15.255 12:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:15.256 12:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:15.256 12:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:15.256 12:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=83027 00:25:15.256 12:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 83027 /var/tmp/bdevperf.sock 00:25:15.256 12:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83027 ']' 00:25:15.256 12:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:15.256 12:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:15.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:15.256 12:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:25:15.256 12:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:15.256 12:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:15.256 12:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:25:15.256 12:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:25:15.256 "subsystems": [ 00:25:15.256 { 00:25:15.256 "subsystem": "keyring", 00:25:15.256 "config": [ 00:25:15.256 { 00:25:15.256 "method": "keyring_file_add_key", 00:25:15.256 "params": { 00:25:15.256 "name": "key0", 00:25:15.256 "path": "/tmp/tmp.zndCmKUwR3" 00:25:15.256 } 00:25:15.256 } 00:25:15.256 ] 00:25:15.256 }, 00:25:15.256 { 00:25:15.256 "subsystem": "iobuf", 00:25:15.256 "config": [ 00:25:15.256 { 00:25:15.256 "method": "iobuf_set_options", 00:25:15.256 "params": { 00:25:15.256 "enable_numa": false, 00:25:15.256 "large_bufsize": 135168, 00:25:15.256 "large_pool_count": 1024, 00:25:15.256 "small_bufsize": 8192, 00:25:15.256 "small_pool_count": 8192 00:25:15.256 } 00:25:15.256 } 00:25:15.256 ] 00:25:15.256 }, 00:25:15.256 { 00:25:15.256 "subsystem": "sock", 00:25:15.256 "config": [ 00:25:15.256 { 00:25:15.256 "method": "sock_set_default_impl", 00:25:15.256 "params": { 00:25:15.256 "impl_name": "posix" 00:25:15.256 } 00:25:15.256 }, 00:25:15.256 { 00:25:15.256 "method": "sock_impl_set_options", 00:25:15.256 "params": { 00:25:15.256 "enable_ktls": false, 00:25:15.256 "enable_placement_id": 0, 00:25:15.256 "enable_quickack": false, 00:25:15.256 "enable_recv_pipe": true, 00:25:15.256 "enable_zerocopy_send_client": false, 00:25:15.256 "enable_zerocopy_send_server": true, 00:25:15.256 "impl_name": "ssl", 00:25:15.256 "recv_buf_size": 4096, 00:25:15.256 "send_buf_size": 4096, 00:25:15.256 "tls_version": 0, 00:25:15.256 "zerocopy_threshold": 0 00:25:15.256 } 00:25:15.256 }, 00:25:15.256 { 00:25:15.256 "method": "sock_impl_set_options", 00:25:15.256 "params": { 00:25:15.256 "enable_ktls": false, 00:25:15.256 "enable_placement_id": 0, 00:25:15.256 "enable_quickack": false, 00:25:15.256 "enable_recv_pipe": true, 00:25:15.256 "enable_zerocopy_send_client": false, 00:25:15.256 "enable_zerocopy_send_server": true, 00:25:15.256 "impl_name": "posix", 00:25:15.256 "recv_buf_size": 2097152, 00:25:15.256 "send_buf_size": 2097152, 00:25:15.256 "tls_version": 0, 00:25:15.256 "zerocopy_threshold": 0 00:25:15.256 } 00:25:15.256 } 00:25:15.256 ] 00:25:15.256 }, 00:25:15.256 { 00:25:15.256 "subsystem": "vmd", 00:25:15.256 "config": [] 00:25:15.256 }, 00:25:15.256 { 00:25:15.256 "subsystem": "accel", 00:25:15.256 "config": [ 00:25:15.256 { 00:25:15.256 "method": "accel_set_options", 00:25:15.256 "params": { 00:25:15.256 "buf_count": 2048, 00:25:15.256 "large_cache_size": 16, 00:25:15.256 "sequence_count": 2048, 00:25:15.256 "small_cache_size": 128, 00:25:15.256 "task_count": 2048 00:25:15.256 } 00:25:15.256 } 00:25:15.256 ] 00:25:15.256 }, 00:25:15.256 { 00:25:15.256 "subsystem": "bdev", 00:25:15.256 "config": [ 00:25:15.256 { 00:25:15.256 "method": "bdev_set_options", 00:25:15.256 "params": { 00:25:15.256 "bdev_auto_examine": true, 00:25:15.256 "bdev_io_cache_size": 256, 00:25:15.256 "bdev_io_pool_size": 65535, 00:25:15.256 "iobuf_large_cache_size": 16, 00:25:15.256 "iobuf_small_cache_size": 128 00:25:15.256 } 00:25:15.256 }, 00:25:15.256 { 00:25:15.256 "method": "bdev_raid_set_options", 
00:25:15.256 "params": { 00:25:15.256 "process_max_bandwidth_mb_sec": 0, 00:25:15.256 "process_window_size_kb": 1024 00:25:15.256 } 00:25:15.256 }, 00:25:15.256 { 00:25:15.256 "method": "bdev_iscsi_set_options", 00:25:15.256 "params": { 00:25:15.256 "timeout_sec": 30 00:25:15.256 } 00:25:15.256 }, 00:25:15.256 { 00:25:15.256 "method": "bdev_nvme_set_options", 00:25:15.256 "params": { 00:25:15.256 "action_on_timeout": "none", 00:25:15.256 "allow_accel_sequence": false, 00:25:15.256 "arbitration_burst": 0, 00:25:15.256 "bdev_retry_count": 3, 00:25:15.256 "ctrlr_loss_timeout_sec": 0, 00:25:15.256 "delay_cmd_submit": true, 00:25:15.256 "dhchap_dhgroups": [ 00:25:15.256 "null", 00:25:15.256 "ffdhe2048", 00:25:15.256 "ffdhe3072", 00:25:15.256 "ffdhe4096", 00:25:15.256 "ffdhe6144", 00:25:15.256 "ffdhe8192" 00:25:15.256 ], 00:25:15.256 "dhchap_digests": [ 00:25:15.256 "sha256", 00:25:15.256 "sha384", 00:25:15.256 "sha512" 00:25:15.256 ], 00:25:15.256 "disable_auto_failback": false, 00:25:15.256 "fast_io_fail_timeout_sec": 0, 00:25:15.256 "generate_uuids": false, 00:25:15.256 "high_priority_weight": 0, 00:25:15.256 "io_path_stat": false, 00:25:15.256 "io_queue_requests": 512, 00:25:15.256 "keep_alive_timeout_ms": 10000, 00:25:15.256 "low_priority_weight": 0, 00:25:15.256 "medium_priority_weight": 0, 00:25:15.256 "nvme_adminq_poll_period_us": 10000, 00:25:15.256 "nvme_error_stat": false, 00:25:15.256 "nvme_ioq_poll_period_us": 0, 00:25:15.256 "rdma_cm_event_timeout_ms": 0, 00:25:15.256 "rdma_max_cq_size": 0, 00:25:15.256 "rdma_srq_size": 0, 00:25:15.256 "reconnect_delay_sec": 0, 00:25:15.256 "timeout_admin_us": 0, 00:25:15.256 "timeout_us": 0, 00:25:15.256 "transport_ack_timeout": 0, 00:25:15.256 "transport_retry_count": 4, 00:25:15.256 "transport_tos": 0 00:25:15.256 } 00:25:15.256 }, 00:25:15.256 { 00:25:15.256 "method": "bdev_nvme_attach_controller", 00:25:15.256 "params": { 00:25:15.256 "adrfam": "IPv4", 00:25:15.256 "ctrlr_loss_timeout_sec": 0, 00:25:15.256 "ddgst": false, 00:25:15.256 "fast_io_fail_timeout_sec": 0, 00:25:15.256 "hdgst": false, 00:25:15.256 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:15.256 "multipath": "multipath", 00:25:15.256 "name": "TLSTEST", 00:25:15.256 "prchk_guard": false, 00:25:15.256 "prchk_reftag": false, 00:25:15.256 "psk": "key0", 00:25:15.256 "reconnect_delay_sec": 0, 00:25:15.256 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:15.256 "traddr": "10.0.0.3", 00:25:15.256 "trsvcid": "4420", 00:25:15.256 "trtype": "TCP" 00:25:15.256 } 00:25:15.256 }, 00:25:15.256 { 00:25:15.256 "method": "bdev_nvme_set_hotplug", 00:25:15.256 "params": { 00:25:15.256 "enable": false, 00:25:15.256 "period_us": 100000 00:25:15.256 } 00:25:15.256 }, 00:25:15.256 { 00:25:15.256 "method": "bdev_wait_for_examine" 00:25:15.256 } 00:25:15.256 ] 00:25:15.256 }, 00:25:15.256 { 00:25:15.256 "subsystem": "nbd", 00:25:15.256 "config": [] 00:25:15.256 } 00:25:15.256 ] 00:25:15.256 }' 00:25:15.256 [2024-11-25 12:24:11.063796] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:25:15.256 [2024-11-25 12:24:11.063861] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83027 ] 00:25:15.514 [2024-11-25 12:24:11.203813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:15.514 [2024-11-25 12:24:11.240209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:15.514 [2024-11-25 12:24:11.378432] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:16.078 12:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:16.079 12:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:16.079 12:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:25:16.336 Running I/O for 10 seconds... 00:25:18.206 6469.00 IOPS, 25.27 MiB/s [2024-11-25T12:24:15.456Z] 6715.50 IOPS, 26.23 MiB/s [2024-11-25T12:24:16.391Z] 6835.00 IOPS, 26.70 MiB/s [2024-11-25T12:24:17.326Z] 6896.50 IOPS, 26.94 MiB/s [2024-11-25T12:24:18.258Z] 6917.20 IOPS, 27.02 MiB/s [2024-11-25T12:24:19.191Z] 6754.33 IOPS, 26.38 MiB/s [2024-11-25T12:24:20.124Z] 6642.71 IOPS, 25.95 MiB/s [2024-11-25T12:24:21.060Z] 6608.25 IOPS, 25.81 MiB/s [2024-11-25T12:24:22.434Z] 6653.56 IOPS, 25.99 MiB/s [2024-11-25T12:24:22.434Z] 6697.30 IOPS, 26.16 MiB/s 00:25:26.558 Latency(us) 00:25:26.558 [2024-11-25T12:24:22.434Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:26.558 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:26.558 Verification LBA range: start 0x0 length 0x2000 00:25:26.558 TLSTESTn1 : 10.01 6703.39 26.19 0.00 0.00 19064.31 3554.07 20467.40 00:25:26.558 [2024-11-25T12:24:22.434Z] =================================================================================================================== 00:25:26.558 [2024-11-25T12:24:22.434Z] Total : 6703.39 26.19 0.00 0.00 19064.31 3554.07 20467.40 00:25:26.558 { 00:25:26.558 "results": [ 00:25:26.558 { 00:25:26.558 "job": "TLSTESTn1", 00:25:26.558 "core_mask": "0x4", 00:25:26.558 "workload": "verify", 00:25:26.558 "status": "finished", 00:25:26.558 "verify_range": { 00:25:26.558 "start": 0, 00:25:26.558 "length": 8192 00:25:26.558 }, 00:25:26.558 "queue_depth": 128, 00:25:26.558 "io_size": 4096, 00:25:26.558 "runtime": 10.009709, 00:25:26.558 "iops": 6703.391677020781, 00:25:26.558 "mibps": 26.185123738362424, 00:25:26.558 "io_failed": 0, 00:25:26.558 "io_timeout": 0, 00:25:26.558 "avg_latency_us": 19064.305886480022, 00:25:26.558 "min_latency_us": 3554.067692307692, 00:25:26.558 "max_latency_us": 20467.39692307692 00:25:26.558 } 00:25:26.558 ], 00:25:26.558 "core_count": 1 00:25:26.558 } 00:25:26.558 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:26.558 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 83027 00:25:26.558 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83027 ']' 00:25:26.558 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83027 00:25:26.559 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:25:26.559 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:26.559 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83027 00:25:26.559 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:26.559 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:26.559 killing process with pid 83027 00:25:26.559 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83027' 00:25:26.559 Received shutdown signal, test time was about 10.000000 seconds 00:25:26.559 00:25:26.559 Latency(us) 00:25:26.559 [2024-11-25T12:24:22.435Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:26.559 [2024-11-25T12:24:22.435Z] =================================================================================================================== 00:25:26.559 [2024-11-25T12:24:22.435Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:26.559 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83027 00:25:26.559 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83027 00:25:26.559 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 82983 00:25:26.559 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 82983 ']' 00:25:26.559 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 82983 00:25:26.559 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:26.559 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:26.559 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82983 00:25:26.559 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:26.559 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:26.559 killing process with pid 82983 00:25:26.559 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82983' 00:25:26.559 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 82983 00:25:26.559 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 82983 00:25:26.559 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:25:26.559 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:26.559 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:26.559 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:26.559 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=83178 00:25:26.559 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 83178 00:25:26.559 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83178 ']' 00:25:26.559 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:26.559 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:26.559 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:26.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:26.559 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:26.559 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:26.559 12:24:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:26.559 [2024-11-25 12:24:22.346826] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:25:26.559 [2024-11-25 12:24:22.346878] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:26.817 [2024-11-25 12:24:22.482389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:26.817 [2024-11-25 12:24:22.517044] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:26.817 [2024-11-25 12:24:22.517084] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:26.817 [2024-11-25 12:24:22.517091] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:26.817 [2024-11-25 12:24:22.517096] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:26.817 [2024-11-25 12:24:22.517100] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
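The setup_nvmf_tgt helper traced next (target/tls.sh lines 50-59) reduces to a short RPC sequence: create the TCP transport, create the subsystem and its malloc namespace, open a TLS-enabled listener (-k), register the PSK file as key0, and allow host1 with that key. The same commands, copied from the trace below, with rpc.py standing in for /home/vagrant/spdk_repo/spdk/scripts/rpc.py:

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py keyring_file_add_key key0 /tmp/tmp.zndCmKUwR3
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0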
00:25:26.817 [2024-11-25 12:24:22.517372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:27.382 12:24:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:27.382 12:24:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:27.382 12:24:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:27.382 12:24:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:27.382 12:24:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:27.382 12:24:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:27.382 12:24:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.zndCmKUwR3 00:25:27.382 12:24:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.zndCmKUwR3 00:25:27.383 12:24:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:27.641 [2024-11-25 12:24:23.399548] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:27.641 12:24:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:27.899 12:24:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:25:28.157 [2024-11-25 12:24:23.799622] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:28.157 [2024-11-25 12:24:23.799801] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:28.157 12:24:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:28.157 malloc0 00:25:28.157 12:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:28.414 12:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.zndCmKUwR3 00:25:28.672 12:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:25:28.931 12:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=83288 00:25:28.931 12:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:28.931 12:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 83288 /var/tmp/bdevperf.sock 00:25:28.931 12:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83288 ']' 00:25:28.931 12:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:28.931 12:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:28.931 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock... 00:25:28.931 12:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:28.931 12:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:28.931 12:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:28.931 12:24:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:25:28.931 [2024-11-25 12:24:24.756531] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:25:28.931 [2024-11-25 12:24:24.756602] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83288 ] 00:25:29.246 [2024-11-25 12:24:24.894510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:29.246 [2024-11-25 12:24:24.931015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:29.812 12:24:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:29.812 12:24:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:29.812 12:24:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zndCmKUwR3 00:25:30.070 12:24:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:30.328 [2024-11-25 12:24:26.002764] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:30.328 nvme0n1 00:25:30.328 12:24:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:30.328 Running I/O for 1 seconds... 
00:25:31.700 6270.00 IOPS, 24.49 MiB/s 00:25:31.700 Latency(us) 00:25:31.700 [2024-11-25T12:24:27.576Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:31.700 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:31.700 Verification LBA range: start 0x0 length 0x2000 00:25:31.700 nvme0n1 : 1.01 6337.41 24.76 0.00 0.00 20086.21 2646.65 15930.29 00:25:31.700 [2024-11-25T12:24:27.576Z] =================================================================================================================== 00:25:31.700 [2024-11-25T12:24:27.576Z] Total : 6337.41 24.76 0.00 0.00 20086.21 2646.65 15930.29 00:25:31.700 { 00:25:31.700 "results": [ 00:25:31.700 { 00:25:31.700 "job": "nvme0n1", 00:25:31.700 "core_mask": "0x2", 00:25:31.700 "workload": "verify", 00:25:31.700 "status": "finished", 00:25:31.700 "verify_range": { 00:25:31.700 "start": 0, 00:25:31.700 "length": 8192 00:25:31.700 }, 00:25:31.700 "queue_depth": 128, 00:25:31.700 "io_size": 4096, 00:25:31.700 "runtime": 1.009718, 00:25:31.700 "iops": 6337.413020269026, 00:25:31.700 "mibps": 24.755519610425882, 00:25:31.700 "io_failed": 0, 00:25:31.700 "io_timeout": 0, 00:25:31.700 "avg_latency_us": 20086.20850084749, 00:25:31.700 "min_latency_us": 2646.646153846154, 00:25:31.700 "max_latency_us": 15930.289230769231 00:25:31.700 } 00:25:31.700 ], 00:25:31.700 "core_count": 1 00:25:31.700 } 00:25:31.700 12:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 83288 00:25:31.700 12:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83288 ']' 00:25:31.700 12:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83288 00:25:31.700 12:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:31.701 12:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:31.701 12:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83288 00:25:31.701 12:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:31.701 12:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:31.701 killing process with pid 83288 00:25:31.701 12:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83288' 00:25:31.701 Received shutdown signal, test time was about 1.000000 seconds 00:25:31.701 00:25:31.701 Latency(us) 00:25:31.701 [2024-11-25T12:24:27.577Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:31.701 [2024-11-25T12:24:27.577Z] =================================================================================================================== 00:25:31.701 [2024-11-25T12:24:27.577Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:31.701 12:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83288 00:25:31.701 12:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83288 00:25:31.701 12:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 83178 00:25:31.701 12:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83178 ']' 00:25:31.701 12:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83178 00:25:31.701 12:24:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:31.701 12:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:31.701 12:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83178 00:25:31.701 12:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:31.701 12:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:31.701 killing process with pid 83178 00:25:31.701 12:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83178' 00:25:31.701 12:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83178 00:25:31.701 12:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83178 00:25:31.701 12:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:25:31.701 12:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:31.701 12:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:31.701 12:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:31.701 12:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=83352 00:25:31.701 12:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 83352 00:25:31.701 12:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83352 ']' 00:25:31.701 12:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:31.701 12:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:31.701 12:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:31.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:31.701 12:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:31.701 12:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:31.701 12:24:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:31.701 [2024-11-25 12:24:27.500601] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:25:31.701 [2024-11-25 12:24:27.500663] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:31.959 [2024-11-25 12:24:27.636291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:31.959 [2024-11-25 12:24:27.667478] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:31.959 [2024-11-25 12:24:27.667514] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:31.959 [2024-11-25 12:24:27.667520] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:31.959 [2024-11-25 12:24:27.667525] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:31.959 [2024-11-25 12:24:27.667530] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:31.959 [2024-11-25 12:24:27.667757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:32.528 12:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:32.528 12:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:32.528 12:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:32.528 12:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:32.528 12:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:32.786 12:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:32.786 12:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:25:32.786 12:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.786 12:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:32.786 [2024-11-25 12:24:28.413903] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:32.786 malloc0 00:25:32.786 [2024-11-25 12:24:28.439889] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:32.786 [2024-11-25 12:24:28.440021] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:32.786 12:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.786 12:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=83402 00:25:32.786 12:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 83402 /var/tmp/bdevperf.sock 00:25:32.786 12:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:25:32.786 12:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83402 ']' 00:25:32.786 12:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:32.786 12:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:32.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:32.786 12:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:32.786 12:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:32.786 12:24:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:32.786 [2024-11-25 12:24:28.507642] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:25:32.786 [2024-11-25 12:24:28.507700] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83402 ] 00:25:32.786 [2024-11-25 12:24:28.647517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:33.044 [2024-11-25 12:24:28.683340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:33.609 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:33.609 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:33.609 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.zndCmKUwR3 00:25:33.867 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:34.125 [2024-11-25 12:24:29.765310] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:34.125 nvme0n1 00:25:34.125 12:24:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:34.125 Running I/O for 1 seconds... 00:25:35.318 6236.00 IOPS, 24.36 MiB/s 00:25:35.318 Latency(us) 00:25:35.318 [2024-11-25T12:24:31.194Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:35.318 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:35.318 Verification LBA range: start 0x0 length 0x2000 00:25:35.318 nvme0n1 : 1.01 6308.08 24.64 0.00 0.00 20180.82 2558.42 16131.94 00:25:35.318 [2024-11-25T12:24:31.194Z] =================================================================================================================== 00:25:35.318 [2024-11-25T12:24:31.194Z] Total : 6308.08 24.64 0.00 0.00 20180.82 2558.42 16131.94 00:25:35.318 { 00:25:35.318 "results": [ 00:25:35.318 { 00:25:35.318 "job": "nvme0n1", 00:25:35.318 "core_mask": "0x2", 00:25:35.318 "workload": "verify", 00:25:35.318 "status": "finished", 00:25:35.318 "verify_range": { 00:25:35.318 "start": 0, 00:25:35.318 "length": 8192 00:25:35.318 }, 00:25:35.318 "queue_depth": 128, 00:25:35.318 "io_size": 4096, 00:25:35.318 "runtime": 1.009023, 00:25:35.318 "iops": 6308.082174539133, 00:25:35.318 "mibps": 24.64094599429349, 00:25:35.318 "io_failed": 0, 00:25:35.318 "io_timeout": 0, 00:25:35.318 "avg_latency_us": 20180.8170122666, 00:25:35.318 "min_latency_us": 2558.424615384615, 00:25:35.318 "max_latency_us": 16131.938461538462 00:25:35.318 } 00:25:35.318 ], 00:25:35.318 "core_count": 1 00:25:35.318 } 00:25:35.318 12:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:25:35.318 12:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.318 12:24:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:35.318 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.318 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 
00:25:35.318 "subsystems": [ 00:25:35.318 { 00:25:35.318 "subsystem": "keyring", 00:25:35.318 "config": [ 00:25:35.318 { 00:25:35.318 "method": "keyring_file_add_key", 00:25:35.318 "params": { 00:25:35.318 "name": "key0", 00:25:35.318 "path": "/tmp/tmp.zndCmKUwR3" 00:25:35.318 } 00:25:35.318 } 00:25:35.318 ] 00:25:35.318 }, 00:25:35.318 { 00:25:35.318 "subsystem": "iobuf", 00:25:35.318 "config": [ 00:25:35.318 { 00:25:35.318 "method": "iobuf_set_options", 00:25:35.318 "params": { 00:25:35.318 "enable_numa": false, 00:25:35.318 "large_bufsize": 135168, 00:25:35.318 "large_pool_count": 1024, 00:25:35.318 "small_bufsize": 8192, 00:25:35.318 "small_pool_count": 8192 00:25:35.318 } 00:25:35.318 } 00:25:35.318 ] 00:25:35.318 }, 00:25:35.318 { 00:25:35.318 "subsystem": "sock", 00:25:35.318 "config": [ 00:25:35.318 { 00:25:35.318 "method": "sock_set_default_impl", 00:25:35.318 "params": { 00:25:35.318 "impl_name": "posix" 00:25:35.318 } 00:25:35.318 }, 00:25:35.318 { 00:25:35.318 "method": "sock_impl_set_options", 00:25:35.318 "params": { 00:25:35.318 "enable_ktls": false, 00:25:35.318 "enable_placement_id": 0, 00:25:35.318 "enable_quickack": false, 00:25:35.318 "enable_recv_pipe": true, 00:25:35.318 "enable_zerocopy_send_client": false, 00:25:35.318 "enable_zerocopy_send_server": true, 00:25:35.318 "impl_name": "ssl", 00:25:35.318 "recv_buf_size": 4096, 00:25:35.318 "send_buf_size": 4096, 00:25:35.318 "tls_version": 0, 00:25:35.318 "zerocopy_threshold": 0 00:25:35.318 } 00:25:35.318 }, 00:25:35.318 { 00:25:35.318 "method": "sock_impl_set_options", 00:25:35.318 "params": { 00:25:35.318 "enable_ktls": false, 00:25:35.318 "enable_placement_id": 0, 00:25:35.318 "enable_quickack": false, 00:25:35.318 "enable_recv_pipe": true, 00:25:35.318 "enable_zerocopy_send_client": false, 00:25:35.318 "enable_zerocopy_send_server": true, 00:25:35.318 "impl_name": "posix", 00:25:35.318 "recv_buf_size": 2097152, 00:25:35.318 "send_buf_size": 2097152, 00:25:35.318 "tls_version": 0, 00:25:35.318 "zerocopy_threshold": 0 00:25:35.318 } 00:25:35.318 } 00:25:35.318 ] 00:25:35.318 }, 00:25:35.318 { 00:25:35.318 "subsystem": "vmd", 00:25:35.318 "config": [] 00:25:35.318 }, 00:25:35.318 { 00:25:35.318 "subsystem": "accel", 00:25:35.318 "config": [ 00:25:35.318 { 00:25:35.318 "method": "accel_set_options", 00:25:35.318 "params": { 00:25:35.318 "buf_count": 2048, 00:25:35.318 "large_cache_size": 16, 00:25:35.318 "sequence_count": 2048, 00:25:35.318 "small_cache_size": 128, 00:25:35.318 "task_count": 2048 00:25:35.318 } 00:25:35.318 } 00:25:35.318 ] 00:25:35.318 }, 00:25:35.318 { 00:25:35.318 "subsystem": "bdev", 00:25:35.318 "config": [ 00:25:35.318 { 00:25:35.318 "method": "bdev_set_options", 00:25:35.318 "params": { 00:25:35.318 "bdev_auto_examine": true, 00:25:35.318 "bdev_io_cache_size": 256, 00:25:35.318 "bdev_io_pool_size": 65535, 00:25:35.318 "iobuf_large_cache_size": 16, 00:25:35.318 "iobuf_small_cache_size": 128 00:25:35.318 } 00:25:35.318 }, 00:25:35.318 { 00:25:35.318 "method": "bdev_raid_set_options", 00:25:35.318 "params": { 00:25:35.318 "process_max_bandwidth_mb_sec": 0, 00:25:35.318 "process_window_size_kb": 1024 00:25:35.318 } 00:25:35.318 }, 00:25:35.318 { 00:25:35.318 "method": "bdev_iscsi_set_options", 00:25:35.318 "params": { 00:25:35.318 "timeout_sec": 30 00:25:35.318 } 00:25:35.318 }, 00:25:35.318 { 00:25:35.318 "method": "bdev_nvme_set_options", 00:25:35.318 "params": { 00:25:35.318 "action_on_timeout": "none", 00:25:35.318 "allow_accel_sequence": false, 00:25:35.318 "arbitration_burst": 0, 00:25:35.318 
"bdev_retry_count": 3, 00:25:35.318 "ctrlr_loss_timeout_sec": 0, 00:25:35.318 "delay_cmd_submit": true, 00:25:35.318 "dhchap_dhgroups": [ 00:25:35.318 "null", 00:25:35.318 "ffdhe2048", 00:25:35.318 "ffdhe3072", 00:25:35.318 "ffdhe4096", 00:25:35.318 "ffdhe6144", 00:25:35.318 "ffdhe8192" 00:25:35.318 ], 00:25:35.318 "dhchap_digests": [ 00:25:35.318 "sha256", 00:25:35.318 "sha384", 00:25:35.318 "sha512" 00:25:35.318 ], 00:25:35.318 "disable_auto_failback": false, 00:25:35.318 "fast_io_fail_timeout_sec": 0, 00:25:35.318 "generate_uuids": false, 00:25:35.318 "high_priority_weight": 0, 00:25:35.318 "io_path_stat": false, 00:25:35.318 "io_queue_requests": 0, 00:25:35.318 "keep_alive_timeout_ms": 10000, 00:25:35.318 "low_priority_weight": 0, 00:25:35.318 "medium_priority_weight": 0, 00:25:35.318 "nvme_adminq_poll_period_us": 10000, 00:25:35.318 "nvme_error_stat": false, 00:25:35.318 "nvme_ioq_poll_period_us": 0, 00:25:35.318 "rdma_cm_event_timeout_ms": 0, 00:25:35.318 "rdma_max_cq_size": 0, 00:25:35.318 "rdma_srq_size": 0, 00:25:35.318 "reconnect_delay_sec": 0, 00:25:35.318 "timeout_admin_us": 0, 00:25:35.318 "timeout_us": 0, 00:25:35.318 "transport_ack_timeout": 0, 00:25:35.319 "transport_retry_count": 4, 00:25:35.319 "transport_tos": 0 00:25:35.319 } 00:25:35.319 }, 00:25:35.319 { 00:25:35.319 "method": "bdev_nvme_set_hotplug", 00:25:35.319 "params": { 00:25:35.319 "enable": false, 00:25:35.319 "period_us": 100000 00:25:35.319 } 00:25:35.319 }, 00:25:35.319 { 00:25:35.319 "method": "bdev_malloc_create", 00:25:35.319 "params": { 00:25:35.319 "block_size": 4096, 00:25:35.319 "dif_is_head_of_md": false, 00:25:35.319 "dif_pi_format": 0, 00:25:35.319 "dif_type": 0, 00:25:35.319 "md_size": 0, 00:25:35.319 "name": "malloc0", 00:25:35.319 "num_blocks": 8192, 00:25:35.319 "optimal_io_boundary": 0, 00:25:35.319 "physical_block_size": 4096, 00:25:35.319 "uuid": "ef2ca017-b782-4532-8b71-f8680157d16c" 00:25:35.319 } 00:25:35.319 }, 00:25:35.319 { 00:25:35.319 "method": "bdev_wait_for_examine" 00:25:35.319 } 00:25:35.319 ] 00:25:35.319 }, 00:25:35.319 { 00:25:35.319 "subsystem": "nbd", 00:25:35.319 "config": [] 00:25:35.319 }, 00:25:35.319 { 00:25:35.319 "subsystem": "scheduler", 00:25:35.319 "config": [ 00:25:35.319 { 00:25:35.319 "method": "framework_set_scheduler", 00:25:35.319 "params": { 00:25:35.319 "name": "static" 00:25:35.319 } 00:25:35.319 } 00:25:35.319 ] 00:25:35.319 }, 00:25:35.319 { 00:25:35.319 "subsystem": "nvmf", 00:25:35.319 "config": [ 00:25:35.319 { 00:25:35.319 "method": "nvmf_set_config", 00:25:35.319 "params": { 00:25:35.319 "admin_cmd_passthru": { 00:25:35.319 "identify_ctrlr": false 00:25:35.319 }, 00:25:35.319 "dhchap_dhgroups": [ 00:25:35.319 "null", 00:25:35.319 "ffdhe2048", 00:25:35.319 "ffdhe3072", 00:25:35.319 "ffdhe4096", 00:25:35.319 "ffdhe6144", 00:25:35.319 "ffdhe8192" 00:25:35.319 ], 00:25:35.319 "dhchap_digests": [ 00:25:35.319 "sha256", 00:25:35.319 "sha384", 00:25:35.319 "sha512" 00:25:35.319 ], 00:25:35.319 "discovery_filter": "match_any" 00:25:35.319 } 00:25:35.319 }, 00:25:35.319 { 00:25:35.319 "method": "nvmf_set_max_subsystems", 00:25:35.319 "params": { 00:25:35.319 "max_subsystems": 1024 00:25:35.319 } 00:25:35.319 }, 00:25:35.319 { 00:25:35.319 "method": "nvmf_set_crdt", 00:25:35.319 "params": { 00:25:35.319 "crdt1": 0, 00:25:35.319 "crdt2": 0, 00:25:35.319 "crdt3": 0 00:25:35.319 } 00:25:35.319 }, 00:25:35.319 { 00:25:35.319 "method": "nvmf_create_transport", 00:25:35.319 "params": { 00:25:35.319 "abort_timeout_sec": 1, 00:25:35.319 "ack_timeout": 0, 
00:25:35.319 "buf_cache_size": 4294967295, 00:25:35.319 "c2h_success": false, 00:25:35.319 "data_wr_pool_size": 0, 00:25:35.319 "dif_insert_or_strip": false, 00:25:35.319 "in_capsule_data_size": 4096, 00:25:35.319 "io_unit_size": 131072, 00:25:35.319 "max_aq_depth": 128, 00:25:35.319 "max_io_qpairs_per_ctrlr": 127, 00:25:35.319 "max_io_size": 131072, 00:25:35.319 "max_queue_depth": 128, 00:25:35.319 "num_shared_buffers": 511, 00:25:35.319 "sock_priority": 0, 00:25:35.319 "trtype": "TCP", 00:25:35.319 "zcopy": false 00:25:35.319 } 00:25:35.319 }, 00:25:35.319 { 00:25:35.319 "method": "nvmf_create_subsystem", 00:25:35.319 "params": { 00:25:35.319 "allow_any_host": false, 00:25:35.319 "ana_reporting": false, 00:25:35.319 "max_cntlid": 65519, 00:25:35.319 "max_namespaces": 32, 00:25:35.319 "min_cntlid": 1, 00:25:35.319 "model_number": "SPDK bdev Controller", 00:25:35.319 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:35.319 "serial_number": "00000000000000000000" 00:25:35.319 } 00:25:35.319 }, 00:25:35.319 { 00:25:35.319 "method": "nvmf_subsystem_add_host", 00:25:35.319 "params": { 00:25:35.319 "host": "nqn.2016-06.io.spdk:host1", 00:25:35.319 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:35.319 "psk": "key0" 00:25:35.319 } 00:25:35.319 }, 00:25:35.319 { 00:25:35.319 "method": "nvmf_subsystem_add_ns", 00:25:35.319 "params": { 00:25:35.319 "namespace": { 00:25:35.319 "bdev_name": "malloc0", 00:25:35.319 "nguid": "EF2CA017B78245328B71F8680157D16C", 00:25:35.319 "no_auto_visible": false, 00:25:35.319 "nsid": 1, 00:25:35.319 "uuid": "ef2ca017-b782-4532-8b71-f8680157d16c" 00:25:35.319 }, 00:25:35.319 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:25:35.319 } 00:25:35.319 }, 00:25:35.319 { 00:25:35.319 "method": "nvmf_subsystem_add_listener", 00:25:35.319 "params": { 00:25:35.319 "listen_address": { 00:25:35.319 "adrfam": "IPv4", 00:25:35.319 "traddr": "10.0.0.3", 00:25:35.319 "trsvcid": "4420", 00:25:35.319 "trtype": "TCP" 00:25:35.319 }, 00:25:35.319 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:35.319 "secure_channel": false, 00:25:35.319 "sock_impl": "ssl" 00:25:35.319 } 00:25:35.319 } 00:25:35.319 ] 00:25:35.319 } 00:25:35.319 ] 00:25:35.319 }' 00:25:35.319 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:25:35.578 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:25:35.578 "subsystems": [ 00:25:35.578 { 00:25:35.578 "subsystem": "keyring", 00:25:35.578 "config": [ 00:25:35.578 { 00:25:35.578 "method": "keyring_file_add_key", 00:25:35.578 "params": { 00:25:35.578 "name": "key0", 00:25:35.578 "path": "/tmp/tmp.zndCmKUwR3" 00:25:35.578 } 00:25:35.578 } 00:25:35.578 ] 00:25:35.578 }, 00:25:35.578 { 00:25:35.578 "subsystem": "iobuf", 00:25:35.578 "config": [ 00:25:35.578 { 00:25:35.578 "method": "iobuf_set_options", 00:25:35.578 "params": { 00:25:35.578 "enable_numa": false, 00:25:35.578 "large_bufsize": 135168, 00:25:35.578 "large_pool_count": 1024, 00:25:35.578 "small_bufsize": 8192, 00:25:35.578 "small_pool_count": 8192 00:25:35.578 } 00:25:35.578 } 00:25:35.578 ] 00:25:35.578 }, 00:25:35.578 { 00:25:35.578 "subsystem": "sock", 00:25:35.578 "config": [ 00:25:35.578 { 00:25:35.578 "method": "sock_set_default_impl", 00:25:35.578 "params": { 00:25:35.578 "impl_name": "posix" 00:25:35.578 } 00:25:35.578 }, 00:25:35.578 { 00:25:35.578 "method": "sock_impl_set_options", 00:25:35.578 "params": { 00:25:35.578 "enable_ktls": false, 00:25:35.578 "enable_placement_id": 0, 
00:25:35.578 "enable_quickack": false, 00:25:35.578 "enable_recv_pipe": true, 00:25:35.578 "enable_zerocopy_send_client": false, 00:25:35.578 "enable_zerocopy_send_server": true, 00:25:35.578 "impl_name": "ssl", 00:25:35.578 "recv_buf_size": 4096, 00:25:35.578 "send_buf_size": 4096, 00:25:35.578 "tls_version": 0, 00:25:35.578 "zerocopy_threshold": 0 00:25:35.578 } 00:25:35.578 }, 00:25:35.578 { 00:25:35.578 "method": "sock_impl_set_options", 00:25:35.578 "params": { 00:25:35.578 "enable_ktls": false, 00:25:35.578 "enable_placement_id": 0, 00:25:35.578 "enable_quickack": false, 00:25:35.578 "enable_recv_pipe": true, 00:25:35.578 "enable_zerocopy_send_client": false, 00:25:35.578 "enable_zerocopy_send_server": true, 00:25:35.578 "impl_name": "posix", 00:25:35.578 "recv_buf_size": 2097152, 00:25:35.578 "send_buf_size": 2097152, 00:25:35.578 "tls_version": 0, 00:25:35.578 "zerocopy_threshold": 0 00:25:35.578 } 00:25:35.578 } 00:25:35.578 ] 00:25:35.578 }, 00:25:35.578 { 00:25:35.578 "subsystem": "vmd", 00:25:35.578 "config": [] 00:25:35.578 }, 00:25:35.578 { 00:25:35.578 "subsystem": "accel", 00:25:35.578 "config": [ 00:25:35.578 { 00:25:35.578 "method": "accel_set_options", 00:25:35.578 "params": { 00:25:35.578 "buf_count": 2048, 00:25:35.578 "large_cache_size": 16, 00:25:35.578 "sequence_count": 2048, 00:25:35.578 "small_cache_size": 128, 00:25:35.578 "task_count": 2048 00:25:35.578 } 00:25:35.578 } 00:25:35.578 ] 00:25:35.578 }, 00:25:35.578 { 00:25:35.578 "subsystem": "bdev", 00:25:35.578 "config": [ 00:25:35.578 { 00:25:35.578 "method": "bdev_set_options", 00:25:35.578 "params": { 00:25:35.578 "bdev_auto_examine": true, 00:25:35.578 "bdev_io_cache_size": 256, 00:25:35.578 "bdev_io_pool_size": 65535, 00:25:35.578 "iobuf_large_cache_size": 16, 00:25:35.578 "iobuf_small_cache_size": 128 00:25:35.578 } 00:25:35.578 }, 00:25:35.578 { 00:25:35.578 "method": "bdev_raid_set_options", 00:25:35.578 "params": { 00:25:35.578 "process_max_bandwidth_mb_sec": 0, 00:25:35.578 "process_window_size_kb": 1024 00:25:35.578 } 00:25:35.578 }, 00:25:35.578 { 00:25:35.578 "method": "bdev_iscsi_set_options", 00:25:35.578 "params": { 00:25:35.578 "timeout_sec": 30 00:25:35.578 } 00:25:35.578 }, 00:25:35.578 { 00:25:35.578 "method": "bdev_nvme_set_options", 00:25:35.578 "params": { 00:25:35.578 "action_on_timeout": "none", 00:25:35.578 "allow_accel_sequence": false, 00:25:35.578 "arbitration_burst": 0, 00:25:35.578 "bdev_retry_count": 3, 00:25:35.578 "ctrlr_loss_timeout_sec": 0, 00:25:35.578 "delay_cmd_submit": true, 00:25:35.578 "dhchap_dhgroups": [ 00:25:35.578 "null", 00:25:35.578 "ffdhe2048", 00:25:35.578 "ffdhe3072", 00:25:35.578 "ffdhe4096", 00:25:35.578 "ffdhe6144", 00:25:35.579 "ffdhe8192" 00:25:35.579 ], 00:25:35.579 "dhchap_digests": [ 00:25:35.579 "sha256", 00:25:35.579 "sha384", 00:25:35.579 "sha512" 00:25:35.579 ], 00:25:35.579 "disable_auto_failback": false, 00:25:35.579 "fast_io_fail_timeout_sec": 0, 00:25:35.579 "generate_uuids": false, 00:25:35.579 "high_priority_weight": 0, 00:25:35.579 "io_path_stat": false, 00:25:35.579 "io_queue_requests": 512, 00:25:35.579 "keep_alive_timeout_ms": 10000, 00:25:35.579 "low_priority_weight": 0, 00:25:35.579 "medium_priority_weight": 0, 00:25:35.579 "nvme_adminq_poll_period_us": 10000, 00:25:35.579 "nvme_error_stat": false, 00:25:35.579 "nvme_ioq_poll_period_us": 0, 00:25:35.579 "rdma_cm_event_timeout_ms": 0, 00:25:35.579 "rdma_max_cq_size": 0, 00:25:35.579 "rdma_srq_size": 0, 00:25:35.579 "reconnect_delay_sec": 0, 00:25:35.579 "timeout_admin_us": 0, 00:25:35.579 
"timeout_us": 0, 00:25:35.579 "transport_ack_timeout": 0, 00:25:35.579 "transport_retry_count": 4, 00:25:35.579 "transport_tos": 0 00:25:35.579 } 00:25:35.579 }, 00:25:35.579 { 00:25:35.579 "method": "bdev_nvme_attach_controller", 00:25:35.579 "params": { 00:25:35.579 "adrfam": "IPv4", 00:25:35.579 "ctrlr_loss_timeout_sec": 0, 00:25:35.579 "ddgst": false, 00:25:35.579 "fast_io_fail_timeout_sec": 0, 00:25:35.579 "hdgst": false, 00:25:35.579 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:35.579 "multipath": "multipath", 00:25:35.579 "name": "nvme0", 00:25:35.579 "prchk_guard": false, 00:25:35.579 "prchk_reftag": false, 00:25:35.579 "psk": "key0", 00:25:35.579 "reconnect_delay_sec": 0, 00:25:35.579 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:35.579 "traddr": "10.0.0.3", 00:25:35.579 "trsvcid": "4420", 00:25:35.579 "trtype": "TCP" 00:25:35.579 } 00:25:35.579 }, 00:25:35.579 { 00:25:35.579 "method": "bdev_nvme_set_hotplug", 00:25:35.579 "params": { 00:25:35.579 "enable": false, 00:25:35.579 "period_us": 100000 00:25:35.579 } 00:25:35.579 }, 00:25:35.579 { 00:25:35.579 "method": "bdev_enable_histogram", 00:25:35.579 "params": { 00:25:35.579 "enable": true, 00:25:35.579 "name": "nvme0n1" 00:25:35.579 } 00:25:35.579 }, 00:25:35.579 { 00:25:35.579 "method": "bdev_wait_for_examine" 00:25:35.579 } 00:25:35.579 ] 00:25:35.579 }, 00:25:35.579 { 00:25:35.579 "subsystem": "nbd", 00:25:35.579 "config": [] 00:25:35.579 } 00:25:35.579 ] 00:25:35.579 }' 00:25:35.579 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 83402 00:25:35.579 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83402 ']' 00:25:35.579 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83402 00:25:35.579 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:35.579 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:35.579 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83402 00:25:35.579 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:35.579 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:35.579 killing process with pid 83402 00:25:35.579 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83402' 00:25:35.579 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83402 00:25:35.579 Received shutdown signal, test time was about 1.000000 seconds 00:25:35.579 00:25:35.579 Latency(us) 00:25:35.579 [2024-11-25T12:24:31.455Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:35.579 [2024-11-25T12:24:31.455Z] =================================================================================================================== 00:25:35.579 [2024-11-25T12:24:31.455Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:35.579 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83402 00:25:35.838 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 83352 00:25:35.838 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83352 ']' 00:25:35.838 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83352 
00:25:35.838 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:35.838 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:35.838 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83352 00:25:35.838 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:35.838 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:35.838 killing process with pid 83352 00:25:35.838 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83352' 00:25:35.838 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83352 00:25:35.838 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83352 00:25:35.838 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:25:35.838 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:35.838 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:35.838 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:35.838 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:25:35.838 "subsystems": [ 00:25:35.838 { 00:25:35.838 "subsystem": "keyring", 00:25:35.838 "config": [ 00:25:35.838 { 00:25:35.838 "method": "keyring_file_add_key", 00:25:35.838 "params": { 00:25:35.838 "name": "key0", 00:25:35.838 "path": "/tmp/tmp.zndCmKUwR3" 00:25:35.838 } 00:25:35.838 } 00:25:35.838 ] 00:25:35.838 }, 00:25:35.838 { 00:25:35.838 "subsystem": "iobuf", 00:25:35.838 "config": [ 00:25:35.838 { 00:25:35.838 "method": "iobuf_set_options", 00:25:35.838 "params": { 00:25:35.838 "enable_numa": false, 00:25:35.838 "large_bufsize": 135168, 00:25:35.838 "large_pool_count": 1024, 00:25:35.838 "small_bufsize": 8192, 00:25:35.838 "small_pool_count": 8192 00:25:35.838 } 00:25:35.838 } 00:25:35.838 ] 00:25:35.838 }, 00:25:35.838 { 00:25:35.838 "subsystem": "sock", 00:25:35.838 "config": [ 00:25:35.838 { 00:25:35.838 "method": "sock_set_default_impl", 00:25:35.838 "params": { 00:25:35.838 "impl_name": "posix" 00:25:35.838 } 00:25:35.838 }, 00:25:35.838 { 00:25:35.838 "method": "sock_impl_set_options", 00:25:35.838 "params": { 00:25:35.838 "enable_ktls": false, 00:25:35.838 "enable_placement_id": 0, 00:25:35.838 "enable_quickack": false, 00:25:35.838 "enable_recv_pipe": true, 00:25:35.838 "enable_zerocopy_send_client": false, 00:25:35.838 "enable_zerocopy_send_server": true, 00:25:35.838 "impl_name": "ssl", 00:25:35.838 "recv_buf_size": 4096, 00:25:35.838 "send_buf_size": 4096, 00:25:35.838 "tls_version": 0, 00:25:35.838 "zerocopy_threshold": 0 00:25:35.838 } 00:25:35.838 }, 00:25:35.838 { 00:25:35.838 "method": "sock_impl_set_options", 00:25:35.838 "params": { 00:25:35.838 "enable_ktls": false, 00:25:35.838 "enable_placement_id": 0, 00:25:35.838 "enable_quickack": false, 00:25:35.838 "enable_recv_pipe": true, 00:25:35.838 "enable_zerocopy_send_client": false, 00:25:35.838 "enable_zerocopy_send_server": true, 00:25:35.838 "impl_name": "posix", 00:25:35.838 "recv_buf_size": 2097152, 00:25:35.838 "send_buf_size": 2097152, 00:25:35.838 "tls_version": 0, 00:25:35.838 "zerocopy_threshold": 0 00:25:35.838 } 
00:25:35.838 } 00:25:35.838 ] 00:25:35.838 }, 00:25:35.838 { 00:25:35.838 "subsystem": "vmd", 00:25:35.838 "config": [] 00:25:35.838 }, 00:25:35.838 { 00:25:35.838 "subsystem": "accel", 00:25:35.838 "config": [ 00:25:35.838 { 00:25:35.838 "method": "accel_set_options", 00:25:35.838 "params": { 00:25:35.838 "buf_count": 2048, 00:25:35.838 "large_cache_size": 16, 00:25:35.838 "sequence_count": 2048, 00:25:35.838 "small_cache_size": 128, 00:25:35.838 "task_count": 2048 00:25:35.838 } 00:25:35.838 } 00:25:35.838 ] 00:25:35.838 }, 00:25:35.838 { 00:25:35.838 "subsystem": "bdev", 00:25:35.838 "config": [ 00:25:35.838 { 00:25:35.838 "method": "bdev_set_options", 00:25:35.838 "params": { 00:25:35.838 "bdev_auto_examine": true, 00:25:35.838 "bdev_io_cache_size": 256, 00:25:35.838 "bdev_io_pool_size": 65535, 00:25:35.838 "iobuf_large_cache_size": 16, 00:25:35.838 "iobuf_small_cache_size": 128 00:25:35.838 } 00:25:35.838 }, 00:25:35.838 { 00:25:35.838 "method": "bdev_raid_set_options", 00:25:35.838 "params": { 00:25:35.838 "process_max_bandwidth_mb_sec": 0, 00:25:35.838 "process_window_size_kb": 1024 00:25:35.838 } 00:25:35.838 }, 00:25:35.838 { 00:25:35.838 "method": "bdev_iscsi_set_options", 00:25:35.838 "params": { 00:25:35.838 "timeout_sec": 30 00:25:35.838 } 00:25:35.838 }, 00:25:35.838 { 00:25:35.838 "method": "bdev_nvme_set_options", 00:25:35.838 "params": { 00:25:35.838 "action_on_timeout": "none", 00:25:35.838 "allow_accel_sequence": false, 00:25:35.838 "arbitration_burst": 0, 00:25:35.838 "bdev_retry_count": 3, 00:25:35.838 "ctrlr_loss_timeout_sec": 0, 00:25:35.838 "delay_cmd_submit": true, 00:25:35.838 "dhchap_dhgroups": [ 00:25:35.838 "null", 00:25:35.838 "ffdhe2048", 00:25:35.838 "ffdhe3072", 00:25:35.838 "ffdhe4096", 00:25:35.838 "ffdhe6144", 00:25:35.838 "ffdhe8192" 00:25:35.838 ], 00:25:35.838 "dhchap_digests": [ 00:25:35.838 "sha256", 00:25:35.838 "sha384", 00:25:35.838 "sha512" 00:25:35.838 ], 00:25:35.838 "disable_auto_failback": false, 00:25:35.838 "fast_io_fail_timeout_sec": 0, 00:25:35.838 "generate_uuids": false, 00:25:35.838 "high_priority_weight": 0, 00:25:35.838 "io_path_stat": false, 00:25:35.838 "io_queue_requests": 0, 00:25:35.838 "keep_alive_timeout_ms": 10000, 00:25:35.838 "low_priority_weight": 0, 00:25:35.838 "medium_priority_weight": 0, 00:25:35.838 "nvme_adminq_poll_period_us": 10000, 00:25:35.838 "nvme_error_stat": false, 00:25:35.838 "nvme_ioq_poll_period_us": 0, 00:25:35.839 "rdma_cm_event_timeout_ms": 0, 00:25:35.839 "rdma_max_cq_size": 0, 00:25:35.839 "rdma_srq_size": 0, 00:25:35.839 "reconnect_delay_sec": 0, 00:25:35.839 "timeout_admin_us": 0, 00:25:35.839 "timeout_us": 0, 00:25:35.839 "transport_ack_timeout": 0, 00:25:35.839 "transport_retry_count": 4, 00:25:35.839 "transport_tos": 0 00:25:35.839 } 00:25:35.839 }, 00:25:35.839 { 00:25:35.839 "method": "bdev_nvme_set_hotplug", 00:25:35.839 "params": { 00:25:35.839 "enable": false, 00:25:35.839 "period_us": 100000 00:25:35.839 } 00:25:35.839 }, 00:25:35.839 { 00:25:35.839 "method": "bdev_malloc_create", 00:25:35.839 "params": { 00:25:35.839 "block_size": 4096, 00:25:35.839 "dif_is_head_of_md": false, 00:25:35.839 "dif_pi_format": 0, 00:25:35.839 "dif_type": 0, 00:25:35.839 "md_size": 0, 00:25:35.839 "name": "malloc0", 00:25:35.839 "num_blocks": 8192, 00:25:35.839 "optimal_io_boundary": 0, 00:25:35.839 "physical_block_size": 4096, 00:25:35.839 "uuid": "ef2ca017-b782-4532-8b71-f8680157d16c" 00:25:35.839 } 00:25:35.839 }, 00:25:35.839 { 00:25:35.839 "method": "bdev_wait_for_examine" 00:25:35.839 } 00:25:35.839 ] 
00:25:35.839 }, 00:25:35.839 { 00:25:35.839 "subsystem": "nbd", 00:25:35.839 "config": [] 00:25:35.839 }, 00:25:35.839 { 00:25:35.839 "subsystem": "scheduler", 00:25:35.839 "config": [ 00:25:35.839 { 00:25:35.839 "method": "framework_set_scheduler", 00:25:35.839 "params": { 00:25:35.839 "name": "static" 00:25:35.839 } 00:25:35.839 } 00:25:35.839 ] 00:25:35.839 }, 00:25:35.839 { 00:25:35.839 "subsystem": "nvmf", 00:25:35.839 "config": [ 00:25:35.839 { 00:25:35.839 "method": "nvmf_set_config", 00:25:35.839 "params": { 00:25:35.839 "admin_cmd_passthru": { 00:25:35.839 "identify_ctrlr": false 00:25:35.839 }, 00:25:35.839 "dhchap_dhgroups": [ 00:25:35.839 "null", 00:25:35.839 "ffdhe2048", 00:25:35.839 "ffdhe3072", 00:25:35.839 "ffdhe4096", 00:25:35.839 "ffdhe6144", 00:25:35.839 "ffdhe8192" 00:25:35.839 ], 00:25:35.839 "dhchap_digests": [ 00:25:35.839 "sha256", 00:25:35.839 "sha384", 00:25:35.839 "sha512" 00:25:35.839 ], 00:25:35.839 "discovery_filter": "match_any" 00:25:35.839 } 00:25:35.839 }, 00:25:35.839 { 00:25:35.839 "method": "nvmf_set_max_subsystems", 00:25:35.839 "params": { 00:25:35.839 "max_subsystems": 1024 00:25:35.839 } 00:25:35.839 }, 00:25:35.839 { 00:25:35.839 "method": "nvmf_set_crdt", 00:25:35.839 "params": { 00:25:35.839 "crdt1": 0, 00:25:35.839 "crdt2": 0, 00:25:35.839 "crdt3": 0 00:25:35.839 } 00:25:35.839 }, 00:25:35.839 { 00:25:35.839 "method": "nvmf_create_transport", 00:25:35.839 "params": { 00:25:35.839 "abort_timeout_sec": 1, 00:25:35.839 "ack_timeout": 0, 00:25:35.839 "buf_cache_size": 4294967295, 00:25:35.839 "c2h_success": false, 00:25:35.839 "data_wr_pool_size": 0, 00:25:35.839 "dif_insert_or_strip": false, 00:25:35.839 "in_capsule_data_size": 4096, 00:25:35.839 "io_unit_size": 131072, 00:25:35.839 "max_aq_depth": 128, 00:25:35.839 "max_io_qpairs_per_ctrlr": 127, 00:25:35.839 "max_io_size": 131072, 00:25:35.839 "max_queue_depth": 128, 00:25:35.839 "num_shared_buffers": 511, 00:25:35.839 "sock_priority": 0, 00:25:35.839 "trtype": "TCP", 00:25:35.839 "zcopy": false 00:25:35.839 } 00:25:35.839 }, 00:25:35.839 { 00:25:35.839 "method": "nvmf_create_subsystem", 00:25:35.839 "params": { 00:25:35.839 "allow_any_host": false, 00:25:35.839 "ana_reporting": false, 00:25:35.839 "max_cntlid": 65519, 00:25:35.839 "max_namespaces": 32, 00:25:35.839 "min_cntlid": 1, 00:25:35.839 "model_number": "SPDK bdev Controller", 00:25:35.839 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:35.839 "serial_number": "00000000000000000000" 00:25:35.839 } 00:25:35.839 }, 00:25:35.839 { 00:25:35.839 "method": "nvmf_subsystem_add_host", 00:25:35.839 "params": { 00:25:35.839 "host": "nqn.2016-06.io.spdk:host1", 00:25:35.839 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:35.839 "psk": "key0" 00:25:35.839 } 00:25:35.839 }, 00:25:35.839 { 00:25:35.839 "method": "nvmf_subsystem_add_ns", 00:25:35.839 "params": { 00:25:35.839 "namespace": { 00:25:35.839 "bdev_name": "malloc0", 00:25:35.839 "nguid": "EF2CA017B78245328B71F8680157D16C", 00:25:35.839 "no_auto_visible": false, 00:25:35.839 "nsid": 1, 00:25:35.839 "uuid": "ef2ca017-b782-4532-8b71-f8680157d16c" 00:25:35.839 }, 00:25:35.839 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:25:35.839 } 00:25:35.839 }, 00:25:35.839 { 00:25:35.839 "method": "nvmf_subsystem_add_listener", 00:25:35.839 "params": { 00:25:35.839 "listen_address": { 00:25:35.839 "adrfam": "IPv4", 00:25:35.839 "traddr": "10.0.0.3", 00:25:35.839 "trsvcid": "4420", 00:25:35.839 "trtype": "TCP" 00:25:35.839 }, 00:25:35.839 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:35.839 "secure_channel": false, 00:25:35.839 
"sock_impl": "ssl" 00:25:35.839 } 00:25:35.839 } 00:25:35.839 ] 00:25:35.839 } 00:25:35.839 ] 00:25:35.839 }' 00:25:35.839 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=83487 00:25:35.839 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 83487 00:25:35.839 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83487 ']' 00:25:35.839 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:35.839 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:25:35.839 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:35.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:35.839 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:35.839 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:35.839 12:24:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:35.839 [2024-11-25 12:24:31.627941] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:25:35.839 [2024-11-25 12:24:31.628282] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:36.097 [2024-11-25 12:24:31.765536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:36.097 [2024-11-25 12:24:31.796550] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:36.097 [2024-11-25 12:24:31.796687] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:36.097 [2024-11-25 12:24:31.796723] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:36.097 [2024-11-25 12:24:31.796770] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:36.097 [2024-11-25 12:24:31.796802] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:36.097 [2024-11-25 12:24:31.797069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:36.356 [2024-11-25 12:24:31.990460] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:36.356 [2024-11-25 12:24:32.022409] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:36.356 [2024-11-25 12:24:32.022625] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:36.921 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:36.921 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:36.921 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:36.921 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:36.921 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:36.921 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:36.921 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=83531 00:25:36.921 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 83531 /var/tmp/bdevperf.sock 00:25:36.921 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 83531 ']' 00:25:36.921 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:36.921 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:36.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:36.921 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
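Note: the target above receives its entire JSON config on -c /dev/fd/62, i.e. through bash process substitution rather than a config file on disk. A minimal sketch of that pattern, assuming the same binary path and network namespace as this run (config trimmed to one subsystem for brevity):

    # inline JSON config; the run above echoed the full document shown earlier
    config='{ "subsystems": [ { "subsystem": "keyring", "config": [] } ] }'
    bin=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
    # <(echo ...) expands to a /dev/fd/N path, which is exactly what -c received above
    ip netns exec nvmf_tgt_ns_spdk "$bin" -i 0 -e 0xFFFF -c <(echo "$config") &
    nvmfpid=$!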
00:25:36.921 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:36.921 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:36.921 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:25:36.921 12:24:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:25:36.921 "subsystems": [ 00:25:36.921 { 00:25:36.921 "subsystem": "keyring", 00:25:36.921 "config": [ 00:25:36.921 { 00:25:36.921 "method": "keyring_file_add_key", 00:25:36.921 "params": { 00:25:36.921 "name": "key0", 00:25:36.921 "path": "/tmp/tmp.zndCmKUwR3" 00:25:36.921 } 00:25:36.921 } 00:25:36.921 ] 00:25:36.921 }, 00:25:36.921 { 00:25:36.921 "subsystem": "iobuf", 00:25:36.921 "config": [ 00:25:36.921 { 00:25:36.921 "method": "iobuf_set_options", 00:25:36.921 "params": { 00:25:36.921 "enable_numa": false, 00:25:36.921 "large_bufsize": 135168, 00:25:36.921 "large_pool_count": 1024, 00:25:36.921 "small_bufsize": 8192, 00:25:36.921 "small_pool_count": 8192 00:25:36.921 } 00:25:36.921 } 00:25:36.921 ] 00:25:36.921 }, 00:25:36.921 { 00:25:36.921 "subsystem": "sock", 00:25:36.921 "config": [ 00:25:36.921 { 00:25:36.921 "method": "sock_set_default_impl", 00:25:36.921 "params": { 00:25:36.921 "impl_name": "posix" 00:25:36.921 } 00:25:36.921 }, 00:25:36.921 { 00:25:36.921 "method": "sock_impl_set_options", 00:25:36.921 "params": { 00:25:36.921 "enable_ktls": false, 00:25:36.921 "enable_placement_id": 0, 00:25:36.921 "enable_quickack": false, 00:25:36.921 "enable_recv_pipe": true, 00:25:36.922 "enable_zerocopy_send_client": false, 00:25:36.922 "enable_zerocopy_send_server": true, 00:25:36.922 "impl_name": "ssl", 00:25:36.922 "recv_buf_size": 4096, 00:25:36.922 "send_buf_size": 4096, 00:25:36.922 "tls_version": 0, 00:25:36.922 "zerocopy_threshold": 0 00:25:36.922 } 00:25:36.922 }, 00:25:36.922 { 00:25:36.922 "method": "sock_impl_set_options", 00:25:36.922 "params": { 00:25:36.922 "enable_ktls": false, 00:25:36.922 "enable_placement_id": 0, 00:25:36.922 "enable_quickack": false, 00:25:36.922 "enable_recv_pipe": true, 00:25:36.922 "enable_zerocopy_send_client": false, 00:25:36.922 "enable_zerocopy_send_server": true, 00:25:36.922 "impl_name": "posix", 00:25:36.922 "recv_buf_size": 2097152, 00:25:36.922 "send_buf_size": 2097152, 00:25:36.922 "tls_version": 0, 00:25:36.922 "zerocopy_threshold": 0 00:25:36.922 } 00:25:36.922 } 00:25:36.922 ] 00:25:36.922 }, 00:25:36.922 { 00:25:36.922 "subsystem": "vmd", 00:25:36.922 "config": [] 00:25:36.922 }, 00:25:36.922 { 00:25:36.922 "subsystem": "accel", 00:25:36.922 "config": [ 00:25:36.922 { 00:25:36.922 "method": "accel_set_options", 00:25:36.922 "params": { 00:25:36.922 "buf_count": 2048, 00:25:36.922 "large_cache_size": 16, 00:25:36.922 "sequence_count": 2048, 00:25:36.922 "small_cache_size": 128, 00:25:36.922 "task_count": 2048 00:25:36.922 } 00:25:36.922 } 00:25:36.922 ] 00:25:36.922 }, 00:25:36.922 { 00:25:36.922 "subsystem": "bdev", 00:25:36.922 "config": [ 00:25:36.922 { 00:25:36.922 "method": "bdev_set_options", 00:25:36.922 "params": { 00:25:36.922 "bdev_auto_examine": true, 00:25:36.922 "bdev_io_cache_size": 256, 00:25:36.922 "bdev_io_pool_size": 65535, 00:25:36.922 "iobuf_large_cache_size": 16, 00:25:36.922 "iobuf_small_cache_size": 128 00:25:36.922 } 00:25:36.922 }, 00:25:36.922 { 00:25:36.922 "method": "bdev_raid_set_options", 
00:25:36.922 "params": { 00:25:36.922 "process_max_bandwidth_mb_sec": 0, 00:25:36.922 "process_window_size_kb": 1024 00:25:36.922 } 00:25:36.922 }, 00:25:36.922 { 00:25:36.922 "method": "bdev_iscsi_set_options", 00:25:36.922 "params": { 00:25:36.922 "timeout_sec": 30 00:25:36.922 } 00:25:36.922 }, 00:25:36.922 { 00:25:36.922 "method": "bdev_nvme_set_options", 00:25:36.922 "params": { 00:25:36.922 "action_on_timeout": "none", 00:25:36.922 "allow_accel_sequence": false, 00:25:36.922 "arbitration_burst": 0, 00:25:36.922 "bdev_retry_count": 3, 00:25:36.922 "ctrlr_loss_timeout_sec": 0, 00:25:36.922 "delay_cmd_submit": true, 00:25:36.922 "dhchap_dhgroups": [ 00:25:36.922 "null", 00:25:36.922 "ffdhe2048", 00:25:36.922 "ffdhe3072", 00:25:36.922 "ffdhe4096", 00:25:36.922 "ffdhe6144", 00:25:36.922 "ffdhe8192" 00:25:36.922 ], 00:25:36.922 "dhchap_digests": [ 00:25:36.922 "sha256", 00:25:36.922 "sha384", 00:25:36.922 "sha512" 00:25:36.922 ], 00:25:36.922 "disable_auto_failback": false, 00:25:36.922 "fast_io_fail_timeout_sec": 0, 00:25:36.922 "generate_uuids": false, 00:25:36.922 "high_priority_weight": 0, 00:25:36.922 "io_path_stat": false, 00:25:36.922 "io_queue_requests": 512, 00:25:36.922 "keep_alive_timeout_ms": 10000, 00:25:36.922 "low_priority_weight": 0, 00:25:36.922 "medium_priority_weight": 0, 00:25:36.922 "nvme_adminq_poll_period_us": 10000, 00:25:36.922 "nvme_error_stat": false, 00:25:36.922 "nvme_ioq_poll_period_us": 0, 00:25:36.922 "rdma_cm_event_timeout_ms": 0, 00:25:36.922 "rdma_max_cq_size": 0, 00:25:36.922 "rdma_srq_size": 0, 00:25:36.922 "reconnect_delay_sec": 0, 00:25:36.922 "timeout_admin_us": 0, 00:25:36.922 "timeout_us": 0, 00:25:36.922 "transport_ack_timeout": 0, 00:25:36.922 "transport_retry_count": 4, 00:25:36.922 "transport_tos": 0 00:25:36.922 } 00:25:36.922 }, 00:25:36.922 { 00:25:36.922 "method": "bdev_nvme_attach_controller", 00:25:36.922 "params": { 00:25:36.922 "adrfam": "IPv4", 00:25:36.922 "ctrlr_loss_timeout_sec": 0, 00:25:36.922 "ddgst": false, 00:25:36.922 "fast_io_fail_timeout_sec": 0, 00:25:36.922 "hdgst": false, 00:25:36.922 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:36.922 "multipath": "multipath", 00:25:36.922 "name": "nvme0", 00:25:36.922 "prchk_guard": false, 00:25:36.922 "prchk_reftag": false, 00:25:36.922 "psk": "key0", 00:25:36.922 "reconnect_delay_sec": 0, 00:25:36.922 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:36.922 "traddr": "10.0.0.3", 00:25:36.922 "trsvcid": "4420", 00:25:36.922 "trtype": "TCP" 00:25:36.922 } 00:25:36.922 }, 00:25:36.922 { 00:25:36.922 "method": "bdev_nvme_set_hotplug", 00:25:36.922 "params": { 00:25:36.922 "enable": false, 00:25:36.922 "period_us": 100000 00:25:36.922 } 00:25:36.922 }, 00:25:36.922 { 00:25:36.922 "method": "bdev_enable_histogram", 00:25:36.922 "params": { 00:25:36.922 "enable": true, 00:25:36.922 "name": "nvme0n1" 00:25:36.922 } 00:25:36.922 }, 00:25:36.922 { 00:25:36.922 "method": "bdev_wait_for_examine" 00:25:36.922 } 00:25:36.922 ] 00:25:36.922 }, 00:25:36.922 { 00:25:36.922 "subsystem": "nbd", 00:25:36.922 "config": [] 00:25:36.922 } 00:25:36.922 ] 00:25:36.922 }' 00:25:36.922 [2024-11-25 12:24:32.558765] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:25:36.922 [2024-11-25 12:24:32.558822] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83531 ] 00:25:36.922 [2024-11-25 12:24:32.697184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:36.922 [2024-11-25 12:24:32.733087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:37.180 [2024-11-25 12:24:32.871699] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:37.747 12:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:37.747 12:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:37.747 12:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:37.747 12:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:25:38.005 12:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.005 12:24:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:38.005 Running I/O for 1 seconds... 00:25:38.939 7218.00 IOPS, 28.20 MiB/s 00:25:38.939 Latency(us) 00:25:38.939 [2024-11-25T12:24:34.815Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:38.939 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:38.939 Verification LBA range: start 0x0 length 0x2000 00:25:38.939 nvme0n1 : 1.01 7283.21 28.45 0.00 0.00 17459.33 3226.39 21677.29 00:25:38.939 [2024-11-25T12:24:34.815Z] =================================================================================================================== 00:25:38.939 [2024-11-25T12:24:34.815Z] Total : 7283.21 28.45 0.00 0.00 17459.33 3226.39 21677.29 00:25:38.939 { 00:25:38.939 "results": [ 00:25:38.939 { 00:25:38.939 "job": "nvme0n1", 00:25:38.939 "core_mask": "0x2", 00:25:38.939 "workload": "verify", 00:25:38.939 "status": "finished", 00:25:38.939 "verify_range": { 00:25:38.939 "start": 0, 00:25:38.939 "length": 8192 00:25:38.939 }, 00:25:38.939 "queue_depth": 128, 00:25:38.939 "io_size": 4096, 00:25:38.939 "runtime": 1.008758, 00:25:38.939 "iops": 7283.213615158443, 00:25:38.939 "mibps": 28.450053184212667, 00:25:38.939 "io_failed": 0, 00:25:38.939 "io_timeout": 0, 00:25:38.939 "avg_latency_us": 17459.330477955416, 00:25:38.939 "min_latency_us": 3226.3876923076923, 00:25:38.939 "max_latency_us": 21677.292307692307 00:25:38.939 } 00:25:38.939 ], 00:25:38.939 "core_count": 1 00:25:38.939 } 00:25:38.939 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:25:38.939 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:25:38.939 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:25:38.939 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:25:38.939 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:25:38.939 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:25:38.939 
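The per-job numbers above are also emitted as a JSON results object. A hedged sketch of pulling the headline figures out of that document with jq; the file name results.json is hypothetical, since the run above printed the JSON straight to the log:

    # round needs jq >= 1.5; the keys match the results object printed above
    jq -r '.results[] | "\(.job): \(.iops|round) IOPS, \(.avg_latency_us|round) us avg"' results.json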
12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:38.939 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:25:38.939 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:25:38.939 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:25:38.939 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:38.939 nvmf_trace.0 00:25:39.197 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:25:39.197 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 83531 00:25:39.197 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83531 ']' 00:25:39.197 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83531 00:25:39.197 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:39.197 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:39.197 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83531 00:25:39.197 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:39.197 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:39.197 killing process with pid 83531 00:25:39.198 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83531' 00:25:39.198 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83531 00:25:39.198 Received shutdown signal, test time was about 1.000000 seconds 00:25:39.198 00:25:39.198 Latency(us) 00:25:39.198 [2024-11-25T12:24:35.074Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:39.198 [2024-11-25T12:24:35.074Z] =================================================================================================================== 00:25:39.198 [2024-11-25T12:24:35.074Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:39.198 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83531 00:25:39.198 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:25:39.198 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:39.198 12:24:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:25:39.456 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:39.456 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:25:39.456 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:39.456 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:39.456 rmmod nvme_tcp 00:25:39.456 rmmod nvme_fabrics 00:25:39.456 rmmod nvme_keyring 00:25:39.456 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:39.456 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set 
-e 00:25:39.456 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:25:39.456 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 83487 ']' 00:25:39.456 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 83487 00:25:39.456 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 83487 ']' 00:25:39.456 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 83487 00:25:39.456 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:39.456 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:39.456 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83487 00:25:39.456 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:39.456 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:39.456 killing process with pid 83487 00:25:39.456 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83487' 00:25:39.456 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 83487 00:25:39.456 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 83487 00:25:39.456 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:39.456 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:39.456 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:39.456 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:25:39.456 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:25:39.456 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:39.456 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:25:39.456 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:39.456 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:39.456 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:39.456 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:39.456 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:39.456 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:39.456 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:39.715 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:39.715 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:39.715 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:39.715 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:39.715 12:24:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:39.715 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:39.715 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:39.715 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:39.715 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:39.715 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:39.715 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:39.715 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:39.715 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:25:39.715 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.5iuyUsj1ea /tmp/tmp.klscqPGRo8 /tmp/tmp.zndCmKUwR3 00:25:39.715 00:25:39.715 real 1m20.181s 00:25:39.715 user 2m10.685s 00:25:39.715 sys 0m22.917s 00:25:39.715 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:39.715 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:39.715 ************************************ 00:25:39.715 END TEST nvmf_tls 00:25:39.715 ************************************ 00:25:39.715 12:24:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:39.715 12:24:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:39.715 12:24:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:39.715 12:24:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:39.715 ************************************ 00:25:39.715 START TEST nvmf_fips 00:25:39.715 ************************************ 00:25:39.715 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:39.715 * Looking for test storage... 
00:25:39.715 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:25:39.715 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:39.715 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:25:39.715 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:39.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:39.975 --rc genhtml_branch_coverage=1 00:25:39.975 --rc genhtml_function_coverage=1 00:25:39.975 --rc genhtml_legend=1 00:25:39.975 --rc geninfo_all_blocks=1 00:25:39.975 --rc geninfo_unexecuted_blocks=1 00:25:39.975 00:25:39.975 ' 00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:39.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:39.975 --rc genhtml_branch_coverage=1 00:25:39.975 --rc genhtml_function_coverage=1 00:25:39.975 --rc genhtml_legend=1 00:25:39.975 --rc geninfo_all_blocks=1 00:25:39.975 --rc geninfo_unexecuted_blocks=1 00:25:39.975 00:25:39.975 ' 00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:39.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:39.975 --rc genhtml_branch_coverage=1 00:25:39.975 --rc genhtml_function_coverage=1 00:25:39.975 --rc genhtml_legend=1 00:25:39.975 --rc geninfo_all_blocks=1 00:25:39.975 --rc geninfo_unexecuted_blocks=1 00:25:39.975 00:25:39.975 ' 00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:39.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:39.975 --rc genhtml_branch_coverage=1 00:25:39.975 --rc genhtml_function_coverage=1 00:25:39.975 --rc genhtml_legend=1 00:25:39.975 --rc geninfo_all_blocks=1 00:25:39.975 --rc geninfo_unexecuted_blocks=1 00:25:39.975 00:25:39.975 ' 00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
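The lcov check above walks the dotted version strings component by component (scripts/common.sh cmp_versions). A standalone sketch of the same comparison using sort -V instead of the per-component loop; the sort-based form is an assumption, not the script's own code:

    version_ge() {  # true when $1 >= $2 for dotted version strings
        [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]
    }
    version_ge 3.1.1 3.0.0 && echo 'new enough'   # the relation fips.sh checks below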
00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.975 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:39.976 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:25:39.976 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:25:40.236 Error setting digest 00:25:40.236 40923CC2B87F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:25:40.236 40923CC2B87F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:40.236 
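The "Error setting digest" block above is the FIPS self-check: with OPENSSL_CONF pointed at the generated spdk_fips.conf, MD5 must be rejected while FIPS-approved digests keep working. A short sketch mirroring that NOT-wrapped probe (the test's own wrapper lives in autotest_common.sh):

    # under a FIPS provider set, md5 exits non-zero while sha256 succeeds
    echo -n test | openssl md5    || echo 'md5 rejected, as the test expects'
    echo -n test | openssl sha256 && echo 'sha256 still available'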
12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:40.236 Cannot find device "nvmf_init_br" 00:25:40.236 12:24:35 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:40.236 Cannot find device "nvmf_init_br2" 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:25:40.236 Cannot find device "nvmf_tgt_br" 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:40.236 Cannot find device "nvmf_tgt_br2" 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:40.236 Cannot find device "nvmf_init_br" 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:40.236 Cannot find device "nvmf_init_br2" 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:40.236 Cannot find device "nvmf_tgt_br" 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:40.236 Cannot find device "nvmf_tgt_br2" 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:40.236 Cannot find device "nvmf_br" 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:40.236 Cannot find device "nvmf_init_if" 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:40.236 Cannot find device "nvmf_init_if2" 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:40.236 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:25:40.236 12:24:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:40.236 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:40.236 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:25:40.236 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:40.236 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:40.236 12:24:36 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:40.236 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:40.236 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:40.236 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:40.236 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:40.236 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:40.236 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:40.236 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:40.236 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:40.236 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:40.236 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:40.236 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:40.236 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:40.495 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:40.495 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:40.495 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:40.495 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:40.495 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:40.495 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:40.495 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:40.495 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:25:40.495 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:40.495 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:40.495 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:40.495 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:40.495 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:40.495 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:40.495 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:40.495 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:40.495 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:40.495 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:40.495 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:40.495 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:25:40.495 00:25:40.495 --- 10.0.0.3 ping statistics --- 00:25:40.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.495 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:25:40.495 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:40.496 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:25:40.496 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:25:40.496 00:25:40.496 --- 10.0.0.4 ping statistics --- 00:25:40.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.496 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:25:40.496 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:40.496 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:40.496 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:25:40.496 00:25:40.496 --- 10.0.0.1 ping statistics --- 00:25:40.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.496 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:25:40.496 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:40.496 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:40.496 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:25:40.496 00:25:40.496 --- 10.0.0.2 ping statistics --- 00:25:40.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.496 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:25:40.496 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:40.496 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:25:40.496 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:40.496 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:40.496 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:40.496 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:40.496 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:40.496 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:40.496 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:40.496 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:25:40.496 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:40.496 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:40.496 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:40.496 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=83860 00:25:40.496 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 83860 00:25:40.496 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 83860 ']' 00:25:40.496 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:40.496 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:40.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:40.496 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:40.496 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:40.496 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:40.496 12:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:40.496 [2024-11-25 12:24:36.278576] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:25:40.496 [2024-11-25 12:24:36.278642] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:40.753 [2024-11-25 12:24:36.408089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:40.753 [2024-11-25 12:24:36.438674] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:40.753 [2024-11-25 12:24:36.438710] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:40.753 [2024-11-25 12:24:36.438715] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:40.753 [2024-11-25 12:24:36.438719] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:40.753 [2024-11-25 12:24:36.438723] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:40.753 [2024-11-25 12:24:36.438950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:41.319 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:41.319 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:25:41.319 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:41.319 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:41.320 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:41.320 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:41.320 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:25:41.320 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:41.320 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:25:41.320 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.sGS 00:25:41.320 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:41.320 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.sGS 00:25:41.320 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.sGS 00:25:41.320 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.sGS 00:25:41.320 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:41.578 [2024-11-25 12:24:37.297235] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:41.578 [2024-11-25 12:24:37.313189] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:41.578 [2024-11-25 12:24:37.313316] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:41.578 malloc0 00:25:41.578 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:41.578 12:24:37 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=83914 00:25:41.578 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:41.578 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 83914 /var/tmp/bdevperf.sock 00:25:41.578 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 83914 ']' 00:25:41.578 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:41.578 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:41.578 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:41.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:41.578 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:41.578 12:24:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:41.578 [2024-11-25 12:24:37.421450] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:25:41.578 [2024-11-25 12:24:37.421627] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83914 ] 00:25:41.836 [2024-11-25 12:24:37.556928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.836 [2024-11-25 12:24:37.592768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:42.471 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:42.471 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:25:42.471 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.sGS 00:25:42.729 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:42.987 [2024-11-25 12:24:38.697551] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:42.987 TLSTESTn1 00:25:42.987 12:24:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:43.244 Running I/O for 10 seconds... 
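The TLS path just attached is driven entirely over the bdevperf RPC socket: register the interchange PSK with the keyring, attach a TLS-protected NVMe/TCP controller, then kick off the preconfigured workload. A minimal sketch of those three steps, using the socket, key name, PSK path, and NQNs from this run (the rpc and sock shell variables are illustrative shorthand, not part of the test scripts):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  # Register the 0600-permission interchange PSK file under the name "key0".
  $rpc -s $sock keyring_file_add_key key0 /tmp/spdk-psk.sGS
  # Attach a TLS-protected NVMe/TCP controller; bdev TLSTESTn1 appears on success.
  $rpc -s $sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  # Start the workload bdevperf was launched with (-q 128 -o 4096 -w verify -t 10).
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests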
00:25:45.109 6458.00 IOPS, 25.23 MiB/s [2024-11-25T12:24:41.937Z] 6694.50 IOPS, 26.15 MiB/s [2024-11-25T12:24:43.311Z] 6765.00 IOPS, 26.43 MiB/s [2024-11-25T12:24:44.246Z] 6819.00 IOPS, 26.64 MiB/s [2024-11-25T12:24:45.181Z] 6882.60 IOPS, 26.89 MiB/s [2024-11-25T12:24:46.136Z] 6914.83 IOPS, 27.01 MiB/s [2024-11-25T12:24:47.090Z] 6916.43 IOPS, 27.02 MiB/s [2024-11-25T12:24:48.023Z] 6922.62 IOPS, 27.04 MiB/s [2024-11-25T12:24:48.956Z] 6928.11 IOPS, 27.06 MiB/s [2024-11-25T12:24:48.956Z] 6930.20 IOPS, 27.07 MiB/s 00:25:53.080 Latency(us) 00:25:53.080 [2024-11-25T12:24:48.956Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:53.080 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:53.080 Verification LBA range: start 0x0 length 0x2000 00:25:53.080 TLSTESTn1 : 10.01 6935.91 27.09 0.00 0.00 18424.80 3806.13 15526.99 00:25:53.080 [2024-11-25T12:24:48.956Z] =================================================================================================================== 00:25:53.080 [2024-11-25T12:24:48.956Z] Total : 6935.91 27.09 0.00 0.00 18424.80 3806.13 15526.99 00:25:53.080 { 00:25:53.080 "results": [ 00:25:53.080 { 00:25:53.080 "job": "TLSTESTn1", 00:25:53.080 "core_mask": "0x4", 00:25:53.080 "workload": "verify", 00:25:53.080 "status": "finished", 00:25:53.080 "verify_range": { 00:25:53.080 "start": 0, 00:25:53.080 "length": 8192 00:25:53.080 }, 00:25:53.080 "queue_depth": 128, 00:25:53.080 "io_size": 4096, 00:25:53.080 "runtime": 10.010072, 00:25:53.080 "iops": 6935.914147270869, 00:25:53.080 "mibps": 27.09341463777683, 00:25:53.080 "io_failed": 0, 00:25:53.080 "io_timeout": 0, 00:25:53.080 "avg_latency_us": 18424.800277427854, 00:25:53.080 "min_latency_us": 3806.1292307692306, 00:25:53.080 "max_latency_us": 15526.99076923077 00:25:53.080 } 00:25:53.080 ], 00:25:53.080 "core_count": 1 00:25:53.080 } 00:25:53.080 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:25:53.080 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:25:53.080 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:25:53.080 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:25:53.080 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:25:53.080 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:53.080 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:25:53.080 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:25:53.080 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:25:53.080 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:53.080 nvmf_trace.0 00:25:53.338 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:25:53.338 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 83914 00:25:53.338 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 83914 ']' 00:25:53.338 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 
83914 00:25:53.338 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:25:53.338 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:53.338 12:24:48 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83914 00:25:53.338 killing process with pid 83914 00:25:53.338 Received shutdown signal, test time was about 10.000000 seconds 00:25:53.338 00:25:53.338 Latency(us) 00:25:53.338 [2024-11-25T12:24:49.214Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:53.338 [2024-11-25T12:24:49.214Z] =================================================================================================================== 00:25:53.338 [2024-11-25T12:24:49.214Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:53.338 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:53.338 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:53.338 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83914' 00:25:53.338 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 83914 00:25:53.339 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 83914 00:25:53.339 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:25:53.339 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:53.339 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:25:53.339 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:53.339 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:25:53.339 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:53.339 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:53.339 rmmod nvme_tcp 00:25:53.339 rmmod nvme_fabrics 00:25:53.339 rmmod nvme_keyring 00:25:53.339 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:53.339 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:25:53.339 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:25:53.339 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 83860 ']' 00:25:53.339 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 83860 00:25:53.339 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 83860 ']' 00:25:53.339 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 83860 00:25:53.339 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:25:53.339 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:53.596 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83860 00:25:53.596 killing process with pid 83860 00:25:53.596 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:53.596 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:53.596 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83860' 00:25:53.596 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 83860 00:25:53.596 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 83860 00:25:53.596 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:53.596 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:53.596 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:53.596 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:25:53.596 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:25:53.596 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:53.596 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:25:53.596 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:53.596 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:53.596 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:53.596 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:53.596 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:53.596 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:53.596 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:53.596 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:53.596 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:53.596 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:53.596 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:53.596 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:53.854 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:53.854 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:53.854 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:53.854 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:53.854 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:53.854 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:53.854 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:53.854 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:25:53.854 12:24:49 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.sGS 00:25:53.854 00:25:53.854 real 0m14.066s 00:25:53.854 user 0m20.321s 00:25:53.854 sys 0m4.595s 00:25:53.854 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:53.854 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:53.854 ************************************ 00:25:53.854 END TEST nvmf_fips 00:25:53.854 ************************************ 00:25:53.854 12:24:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:25:53.854 12:24:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:53.854 12:24:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:53.854 12:24:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:53.854 ************************************ 00:25:53.854 START TEST nvmf_control_msg_list 00:25:53.854 ************************************ 00:25:53.854 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:25:53.855 * Looking for test storage... 00:25:53.855 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:53.855 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:53.855 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:53.855 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:25:54.114 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:54.114 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:54.114 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:54.114 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:54.114 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:25:54.114 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:25:54.114 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:25:54.114 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:25:54.114 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:25:54.114 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:25:54.114 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:25:54.114 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:54.114 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:54.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:54.115 --rc genhtml_branch_coverage=1 00:25:54.115 --rc genhtml_function_coverage=1 00:25:54.115 --rc genhtml_legend=1 00:25:54.115 --rc geninfo_all_blocks=1 00:25:54.115 --rc geninfo_unexecuted_blocks=1 00:25:54.115 00:25:54.115 ' 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:54.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:54.115 --rc genhtml_branch_coverage=1 00:25:54.115 --rc genhtml_function_coverage=1 00:25:54.115 --rc genhtml_legend=1 00:25:54.115 --rc geninfo_all_blocks=1 00:25:54.115 --rc geninfo_unexecuted_blocks=1 00:25:54.115 00:25:54.115 ' 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:54.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:54.115 --rc genhtml_branch_coverage=1 00:25:54.115 --rc genhtml_function_coverage=1 00:25:54.115 --rc genhtml_legend=1 00:25:54.115 --rc geninfo_all_blocks=1 00:25:54.115 --rc geninfo_unexecuted_blocks=1 00:25:54.115 00:25:54.115 ' 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:54.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:54.115 --rc genhtml_branch_coverage=1 00:25:54.115 --rc genhtml_function_coverage=1 00:25:54.115 --rc genhtml_legend=1 00:25:54.115 --rc geninfo_all_blocks=1 00:25:54.115 --rc 
geninfo_unexecuted_blocks=1 00:25:54.115 00:25:54.115 ' 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:54.115 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:25:54.115 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:54.116 Cannot find device "nvmf_init_br" 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:54.116 Cannot find device "nvmf_init_br2" 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:25:54.116 Cannot find device "nvmf_tgt_br" 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:54.116 Cannot find device "nvmf_tgt_br2" 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:54.116 Cannot find device "nvmf_init_br" 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:54.116 Cannot find device "nvmf_init_br2" 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:54.116 Cannot find device "nvmf_tgt_br" 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:54.116 Cannot find device "nvmf_tgt_br2" 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:54.116 Cannot find device "nvmf_br" 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:54.116 Cannot find 
device "nvmf_init_if" 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:54.116 Cannot find device "nvmf_init_if2" 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:54.116 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:54.116 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:54.116 12:24:49 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:54.116 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:54.375 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:54.375 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:54.375 12:24:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:25:54.375 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:54.375 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:54.375 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:54.375 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:54.375 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:54.375 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:54.375 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:54.375 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:54.375 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:54.375 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:54.375 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:54.375 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:25:54.375 00:25:54.375 --- 10.0.0.3 ping statistics --- 00:25:54.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.375 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:25:54.375 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:54.375 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:25:54.375 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.034 ms 00:25:54.375 00:25:54.375 --- 10.0.0.4 ping statistics --- 00:25:54.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.375 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:25:54.375 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:54.375 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:54.375 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:25:54.375 00:25:54.375 --- 10.0.0.1 ping statistics --- 00:25:54.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.375 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:25:54.375 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:54.375 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:54.375 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:25:54.375 00:25:54.375 --- 10.0.0.2 ping statistics --- 00:25:54.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.375 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:25:54.375 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:54.375 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:25:54.375 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:54.375 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:54.375 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:54.375 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:54.375 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:54.375 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:54.375 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:54.375 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:25:54.375 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:54.375 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:54.376 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:54.376 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=84329 00:25:54.376 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 84329 00:25:54.376 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 84329 ']' 00:25:54.376 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:54.376 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:54.376 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:54.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
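The nvmf_veth_init sequence replayed above builds the same topology both tests rely on: a veth pair per endpoint, the target-side interfaces moved into the nvmf_tgt_ns_spdk namespace, and every host-side peer joined through the nvmf_br bridge. A condensed sketch using the names and addresses from this run, trimmed to the first initiator/target pair (the full init also creates nvmf_init_if2 and nvmf_tgt_if2 carrying 10.0.0.2 and 10.0.0.4):

  ip netns add nvmf_tgt_ns_spdk
  # One veth pair per endpoint; the *_br ends stay on the host for bridging.
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # Join the host-side peers through a bridge so the two segments can talk.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # Admit NVMe/TCP on port 4420 and bridge-local forwarding; the SPDK_NVMF
  # comment tags each rule so the iptr helper can strip them at teardown.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

The four pings just above are the smoke test for that wiring: the host reaches both target addresses through the bridge, and the namespace reaches both initiator addresses back.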
00:25:54.376 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:54.376 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:54.376 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:54.376 [2024-11-25 12:24:50.109163] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:25:54.376 [2024-11-25 12:24:50.109217] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:54.376 [2024-11-25 12:24:50.248293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.634 [2024-11-25 12:24:50.282578] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:54.634 [2024-11-25 12:24:50.282616] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:54.634 [2024-11-25 12:24:50.282623] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:54.634 [2024-11-25 12:24:50.282628] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:54.634 [2024-11-25 12:24:50.282632] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:54.634 [2024-11-25 12:24:50.282904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:55.201 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:55.201 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:25:55.201 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:55.201 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:55.201 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:55.201 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:55.201 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:55.201 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:25:55.201 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:25:55.201 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.201 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:55.201 [2024-11-25 12:24:50.981988] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:55.201 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.201 12:24:50 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:25:55.201 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.201 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:55.201 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.201 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:55.201 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.201 12:24:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:55.201 Malloc0 00:25:55.201 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.201 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:55.201 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.201 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:55.201 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.201 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:25:55.201 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:55.201 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:55.201 [2024-11-25 12:24:51.016573] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:55.201 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:55.201 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=84379 00:25:55.201 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:25:55.201 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=84380 00:25:55.201 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:25:55.201 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=84381 00:25:55.201 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 84379 00:25:55.201 12:24:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:25:55.459 
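The design of this test is visible in the rpc_cmd calls above: the transport is created with --control-msg-num 1, i.e. a single control message buffer, and the three spdk_nvme_perf clients are pinned to different cores (-c 0x2, 0x4, 0x8) so they contend for that one buffer at the same time; the test passes if all three runs still complete. The sequence reduces to roughly the following (arguments copied from the log; the RPC socket path is left at rpc.py's default, which is an assumption of this sketch):

  rpc=scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
  $rpc nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
  $rpc bdev_malloc_create -b Malloc0 32 512        # 32 MB malloc bdev, 512-byte blocks
  $rpc nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
  # three queue-depth-1 4 KiB randread clients on separate cores, run concurrently
  for mask in 0x2 0x4 0x8; do
    build/bin/spdk_nvme_perf -c "$mask" -q 1 -o 4096 -w randread -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' &
  done
  wait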
[2024-11-25 12:24:51.185124] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:25:55.459 [2024-11-25 12:24:51.185546] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:25:55.459 [2024-11-25 12:24:51.185702] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:25:56.394 Initializing NVMe Controllers
00:25:56.394 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0
00:25:56.394 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1
00:25:56.394 Initialization complete. Launching workers.
00:25:56.394 ========================================================
00:25:56.394                                                                             Latency(us)
00:25:56.394 Device Information                                                       :     IOPS     MiB/s    Average      min      max
00:25:56.394 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1:  4704.00     18.38     212.30   112.44   488.00
00:25:56.394 ========================================================
00:25:56.394 Total                                                                    :  4704.00     18.38     212.30   112.44   488.00
00:25:56.394
00:25:56.394 Initializing NVMe Controllers
00:25:56.394 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0
00:25:56.394 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3
00:25:56.394 Initialization complete. Launching workers.
00:25:56.394 ========================================================
00:25:56.394                                                                             Latency(us)
00:25:56.394 Device Information                                                       :     IOPS     MiB/s    Average      min      max
00:25:56.394 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3:  4691.00     18.32     212.89   106.09   629.22
00:25:56.394 ========================================================
00:25:56.394 Total                                                                    :  4691.00     18.32     212.89   106.09   629.22
00:25:56.394
00:25:56.394 Initializing NVMe Controllers
00:25:56.394 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0
00:25:56.394 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2
00:25:56.394 Initialization complete. Launching workers.
00:25:56.394 ======================================================== 00:25:56.394 Latency(us) 00:25:56.394 Device Information : IOPS MiB/s Average min max 00:25:56.394 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 4700.00 18.36 212.49 145.26 454.68 00:25:56.394 ======================================================== 00:25:56.394 Total : 4700.00 18.36 212.49 145.26 454.68 00:25:56.394 00:25:56.394 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 84380 00:25:56.394 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 84381 00:25:56.394 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:56.394 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:25:56.394 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:56.394 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:25:56.652 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:56.652 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:25:56.652 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:56.652 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:56.652 rmmod nvme_tcp 00:25:56.652 rmmod nvme_fabrics 00:25:56.652 rmmod nvme_keyring 00:25:56.652 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:56.652 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:25:56.652 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:25:56.652 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 84329 ']' 00:25:56.653 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 84329 00:25:56.653 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 84329 ']' 00:25:56.653 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 84329 00:25:56.653 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:25:56.653 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:56.653 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84329 00:25:56.653 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:56.653 killing process with pid 84329 00:25:56.653 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:56.653 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84329' 00:25:56.653 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 84329 00:25:56.653 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@978 -- # wait 84329 00:25:56.653 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:56.653 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:56.653 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:56.653 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:25:56.653 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:25:56.653 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:56.653 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:25:56.653 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:56.653 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:56.653 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:56.653 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:56.653 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:56.911 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:56.911 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:56.911 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:56.911 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:56.911 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:56.911 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:56.911 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:56.911 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:56.911 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:56.911 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:56.911 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:56.911 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:56.911 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:56.911 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:56.911 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:25:56.911 00:25:56.911 real 0m3.092s 00:25:56.911 user 0m5.249s 00:25:56.911 
sys 0m1.060s 00:25:56.911 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:56.911 ************************************ 00:25:56.911 END TEST nvmf_control_msg_list 00:25:56.911 ************************************ 00:25:56.911 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:56.911 12:24:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:56.911 12:24:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:56.911 12:24:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:56.911 12:24:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:56.911 ************************************ 00:25:56.911 START TEST nvmf_wait_for_buf 00:25:56.911 ************************************ 00:25:56.911 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:57.171 * Looking for test storage... 00:25:57.171 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:57.171 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:57.171 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:57.171 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:25:57.171 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:57.171 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:57.171 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:57.171 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:57.171 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:25:57.171 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:57.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:57.172 --rc genhtml_branch_coverage=1 00:25:57.172 --rc genhtml_function_coverage=1 00:25:57.172 --rc genhtml_legend=1 00:25:57.172 --rc geninfo_all_blocks=1 00:25:57.172 --rc geninfo_unexecuted_blocks=1 00:25:57.172 00:25:57.172 ' 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:57.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:57.172 --rc genhtml_branch_coverage=1 00:25:57.172 --rc genhtml_function_coverage=1 00:25:57.172 --rc genhtml_legend=1 00:25:57.172 --rc geninfo_all_blocks=1 00:25:57.172 --rc geninfo_unexecuted_blocks=1 00:25:57.172 00:25:57.172 ' 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:57.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:57.172 --rc genhtml_branch_coverage=1 00:25:57.172 --rc genhtml_function_coverage=1 00:25:57.172 --rc genhtml_legend=1 00:25:57.172 --rc geninfo_all_blocks=1 00:25:57.172 --rc geninfo_unexecuted_blocks=1 00:25:57.172 00:25:57.172 ' 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:57.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:57.172 --rc genhtml_branch_coverage=1 00:25:57.172 --rc genhtml_function_coverage=1 00:25:57.172 --rc genhtml_legend=1 00:25:57.172 --rc geninfo_all_blocks=1 00:25:57.172 --rc geninfo_unexecuted_blocks=1 00:25:57.172 00:25:57.172 ' 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:57.172 12:24:52 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:57.172 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
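A note on the error a few entries up: "/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected" is produced by the traced test '[' '' -eq 1 ']', where an empty, unquoted variable reaches a numeric comparison. The test simply evaluates false and execution continues, so it is noise rather than a failure. The usual defensive patterns for this class of bash wart are sketched below; VAR and enable_feature are hypothetical names, since the actual variable is not visible in the log:

  # fragile: errors with "integer expression expected" when VAR is empty or unset
  if [ "$VAR" -eq 1 ]; then enable_feature; fi

  # robust: default the value to 0 before the numeric comparison
  if [ "${VAR:-0}" -eq 1 ]; then enable_feature; fi

  # or use arithmetic evaluation, where an empty expansion defaulted to 0 is safe
  if (( ${VAR:-0} == 1 )); then enable_feature; fi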
00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:57.172 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:57.173 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:25:57.173 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:25:57.173 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:25:57.173 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:25:57.173 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:25:57.173 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:25:57.173 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:57.173 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:57.173 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:57.173 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:57.173 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:57.173 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:57.173 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:57.173 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:57.173 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:57.173 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:57.173 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:57.173 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:57.173 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:57.173 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:57.173 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:57.173 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:57.173 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:57.173 Cannot find device "nvmf_init_br" 00:25:57.173 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:25:57.173 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:57.173 Cannot find device "nvmf_init_br2" 00:25:57.173 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:25:57.173 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:25:57.173 Cannot find device "nvmf_tgt_br" 00:25:57.173 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:25:57.173 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:57.173 Cannot find device "nvmf_tgt_br2" 00:25:57.173 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:25:57.173 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:57.173 Cannot find device "nvmf_init_br" 00:25:57.173 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:25:57.173 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:57.173 Cannot find device "nvmf_init_br2" 00:25:57.173 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:25:57.173 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:57.173 Cannot find device "nvmf_tgt_br" 00:25:57.173 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:25:57.173 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:57.173 Cannot find device "nvmf_tgt_br2" 00:25:57.173 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:25:57.173 12:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:57.173 Cannot find device "nvmf_br" 00:25:57.173 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:25:57.173 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:57.173 Cannot find device "nvmf_init_if" 00:25:57.173 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:25:57.173 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:57.173 Cannot find device "nvmf_init_if2" 00:25:57.173 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:25:57.173 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:57.173 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:57.173 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:25:57.173 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:57.173 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:57.173 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:25:57.173 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:57.173 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:57.173 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:57.438 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:57.438 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:57.438 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:57.438 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:57.438 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:57.438 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:57.438 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:57.438 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:57.438 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:57.438 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:57.438 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:57.438 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:57.438 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:57.438 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:57.438 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:57.438 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:57.438 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:57.438 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:57.438 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:57.438 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:25:57.438 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:57.438 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:57.438 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:57.438 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:57.439 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:57.439 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:57.439 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:57.439 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:57.439 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:57.439 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:57.439 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:57.439 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:25:57.439 00:25:57.439 --- 10.0.0.3 ping statistics --- 00:25:57.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:57.439 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:25:57.439 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:57.439 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:25:57.439 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.038 ms 00:25:57.439 00:25:57.439 --- 10.0.0.4 ping statistics --- 00:25:57.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:57.439 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:25:57.439 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:57.439 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:57.439 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:25:57.439 00:25:57.439 --- 10.0.0.1 ping statistics --- 00:25:57.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:57.439 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:25:57.439 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:57.439 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:57.439 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:25:57.439 00:25:57.439 --- 10.0.0.2 ping statistics --- 00:25:57.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:57.439 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:25:57.439 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:57.439 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:25:57.439 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:57.439 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:57.439 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:57.439 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:57.439 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:57.439 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:57.439 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:57.439 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:25:57.439 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:57.439 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:57.439 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:57.439 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=84610 00:25:57.439 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 84610 00:25:57.439 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 84610 ']' 00:25:57.439 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:57.439 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:57.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:57.439 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:57.439 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:57.439 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:57.439 12:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:57.439 [2024-11-25 12:24:53.294801] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:25:57.439 [2024-11-25 12:24:53.294856] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:57.698 [2024-11-25 12:24:53.436041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:57.698 [2024-11-25 12:24:53.470560] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:57.698 [2024-11-25 12:24:53.470597] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:57.698 [2024-11-25 12:24:53.470603] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:57.698 [2024-11-25 12:24:53.470608] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:57.698 [2024-11-25 12:24:53.470613] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:57.698 [2024-11-25 12:24:53.470883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:58.631 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:58.631 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:25:58.631 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:58.631 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:58.631 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:58.631 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:58.631 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:58.631 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:25:58.631 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:25:58.631 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.631 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:58.631 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.631 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:25:58.631 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.631 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:58.631 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.631 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:25:58.631 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.631 12:24:54 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:58.631 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.631 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:58.631 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.631 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:58.631 Malloc0 00:25:58.631 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.631 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:25:58.631 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.631 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:58.631 [2024-11-25 12:24:54.295721] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:58.631 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.631 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:25:58.631 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.631 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:58.631 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.631 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:58.631 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.631 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:58.631 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.631 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:25:58.631 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.631 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:58.631 [2024-11-25 12:24:54.319796] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:58.631 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.631 12:24:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:25:58.889 [2024-11-25 12:24:54.513845] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. 
This behavior is deprecated and will be removed in a future release. 00:26:00.263 Initializing NVMe Controllers 00:26:00.263 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:26:00.263 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:26:00.263 Initialization complete. Launching workers. 00:26:00.263 ======================================================== 00:26:00.263 Latency(us) 00:26:00.263 Device Information : IOPS MiB/s Average min max 00:26:00.263 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 127.49 15.94 32498.24 8017.15 64008.88 00:26:00.263 ======================================================== 00:26:00.263 Total : 127.49 15.94 32498.24 8017.15 64008.88 00:26:00.263 00:26:00.263 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:26:00.263 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.263 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:00.263 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:26:00.263 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.263 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2022 00:26:00.263 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2022 -eq 0 ]] 00:26:00.263 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:26:00.263 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:26:00.263 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:00.263 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:26:00.263 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:00.263 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:26:00.263 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:00.263 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:00.263 rmmod nvme_tcp 00:26:00.263 rmmod nvme_fabrics 00:26:00.263 rmmod nvme_keyring 00:26:00.263 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:00.263 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:26:00.263 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:26:00.264 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 84610 ']' 00:26:00.264 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 84610 00:26:00.264 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 84610 ']' 00:26:00.264 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 84610 00:26:00.264 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 
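The perf numbers above are the whole point of wait_for_buf: the transport was created with only 24 shared 8 KiB data buffers (-u 8192 -n 24 -b 24, with the iobuf small pool capped at 154 entries via iobuf_set_options earlier), while each 128 KiB read (-o 131072) at queue depth 4 needs 16 of those buffers, so the target is deliberately starved and forced onto its wait-for-buffer path. That is why this run shows about 127 IOPS at roughly 32.5 ms average latency, against the ~212 us seen in the unconstrained control_msg_list run. The pass criterion is the iobuf retry counter read just above, roughly (a sketch of wait_for_buf.sh@32-33 as reconstructed from the xtrace):

  # a non-zero small_pool.retry for nvmf_TCP proves buffers ran out and requests waited
  retry_count=$(scripts/rpc.py iobuf_get_stats \
    | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
  [[ $retry_count -eq 0 ]] && exit 1   # here retry_count=2022, so the check passes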
00:26:00.264 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:00.264 12:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84610 00:26:00.264 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:00.264 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:00.264 killing process with pid 84610 00:26:00.264 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84610' 00:26:00.264 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 84610 00:26:00.264 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 84610 00:26:00.264 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:00.264 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:00.264 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:00.264 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:26:00.264 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:26:00.264 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:00.264 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:26:00.264 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:00.264 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:00.264 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:00.264 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:00.264 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:00.521 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:00.521 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:00.521 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:00.521 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:00.521 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:00.521 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:00.521 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:00.521 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:00.521 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:00.521 12:24:56 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:00.521 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:00.521 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:00.521 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:00.521 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:00.521 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:26:00.521 00:26:00.521 real 0m3.564s 00:26:00.521 user 0m3.180s 00:26:00.521 sys 0m0.580s 00:26:00.521 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:00.521 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:00.521 ************************************ 00:26:00.521 END TEST nvmf_wait_for_buf 00:26:00.521 ************************************ 00:26:00.521 12:24:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:26:00.521 12:24:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:26:00.521 12:24:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:26:00.521 12:24:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:00.521 12:24:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:00.521 12:24:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:00.521 ************************************ 00:26:00.521 START TEST nvmf_nsid 00:26:00.521 ************************************ 00:26:00.521 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:26:00.779 * Looking for test storage... 
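
Before the next test's traces begin, note what the nvmftestfini sequence that closed out nvmf_wait_for_buf amounts to. A simplified sketch of the veth teardown in test/nvmf/common.sh (the real helper retries the modprobe up to 20 times and guards each command; the final netns delete is an assumption about what _remove_spdk_ns does):

    modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring       # unload the initiator stack
    iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop only SPDK-tagged rules
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster                         # detach from the bridge
        ip link set "$dev" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk                        # assumed equivalent of _remove_spdk_ns
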
00:26:00.779 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:00.779 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:00.779 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:26:00.779 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:00.779 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:00.779 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:00.779 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:00.779 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:00.779 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:26:00.779 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:26:00.779 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:26:00.779 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:26:00.779 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:26:00.779 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:26:00.779 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:26:00.779 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:00.779 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:26:00.779 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:26:00.779 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:00.779 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:00.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:00.780 --rc genhtml_branch_coverage=1 00:26:00.780 --rc genhtml_function_coverage=1 00:26:00.780 --rc genhtml_legend=1 00:26:00.780 --rc geninfo_all_blocks=1 00:26:00.780 --rc geninfo_unexecuted_blocks=1 00:26:00.780 00:26:00.780 ' 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:00.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:00.780 --rc genhtml_branch_coverage=1 00:26:00.780 --rc genhtml_function_coverage=1 00:26:00.780 --rc genhtml_legend=1 00:26:00.780 --rc geninfo_all_blocks=1 00:26:00.780 --rc geninfo_unexecuted_blocks=1 00:26:00.780 00:26:00.780 ' 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:00.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:00.780 --rc genhtml_branch_coverage=1 00:26:00.780 --rc genhtml_function_coverage=1 00:26:00.780 --rc genhtml_legend=1 00:26:00.780 --rc geninfo_all_blocks=1 00:26:00.780 --rc geninfo_unexecuted_blocks=1 00:26:00.780 00:26:00.780 ' 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:00.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:00.780 --rc genhtml_branch_coverage=1 00:26:00.780 --rc genhtml_function_coverage=1 00:26:00.780 --rc genhtml_legend=1 00:26:00.780 --rc geninfo_all_blocks=1 00:26:00.780 --rc geninfo_unexecuted_blocks=1 00:26:00.780 00:26:00.780 ' 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
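
The lcov probe just traced leans on the cmp_versions helper in scripts/common.sh. A condensed sketch of the "<" path it exercised (the real function dispatches on an $op argument and supports more operators; this keeps only the less-than case):

    lt() {
        local -a ver1 ver2
        local IFS=.-: v                         # split versions on dots, dashes, colons
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
        done
        return 1                                # equal is not strictly less-than
    }
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo "pre-2.0 lcov flags"   # 1.15 < 2 here
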
00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:00.780 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:00.780 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:00.781 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:00.781 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:00.781 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:00.781 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:00.781 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:00.781 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:00.781 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:00.781 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:00.781 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:00.781 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:00.781 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:00.781 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:00.781 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:00.781 Cannot find device "nvmf_init_br" 00:26:00.781 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:26:00.781 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:00.781 Cannot find device "nvmf_init_br2" 00:26:00.781 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:26:00.781 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:00.781 Cannot find device "nvmf_tgt_br" 00:26:00.781 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:26:00.781 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:00.781 Cannot find device "nvmf_tgt_br2" 00:26:00.781 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:26:00.781 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:00.781 Cannot find device "nvmf_init_br" 00:26:00.781 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:26:00.781 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:00.781 Cannot find device "nvmf_init_br2" 00:26:00.781 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:26:00.781 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:00.781 Cannot find device "nvmf_tgt_br" 00:26:00.781 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:26:00.781 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:00.781 Cannot find device "nvmf_tgt_br2" 00:26:00.781 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:26:00.781 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:00.781 Cannot find device "nvmf_br" 00:26:00.781 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:26:00.781 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:00.781 Cannot find device "nvmf_init_if" 00:26:00.781 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:26:00.781 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:00.781 Cannot find device "nvmf_init_if2" 00:26:00.781 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:26:00.781 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:00.781 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:00.781 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:26:00.781 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:26:00.781 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:00.781 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:26:00.781 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:00.781 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:01.039 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:01.039 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:01.039 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:01.039 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:01.039 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:01.039 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:01.039 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:01.039 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:01.039 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:01.039 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:01.039 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:01.039 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:01.039 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:01.039 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:01.039 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:01.039 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:01.039 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:01.039 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:01.039 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:01.039 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:01.039 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:01.039 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:01.039 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:01.039 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
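
The commands above (and the "Cannot find device" probes before them) build a simple topology: two veth pairs whose far ends sit in a network namespace, stitched together by a bridge. A condensed sketch, first pair only, since the nvmf_init_if2/nvmf_tgt_if2 pair is analogous (the real nvmf_veth_init interleaves these with the existence checks logged above):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays in the host ns
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end ...
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # ... moves into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                     # the bridge joins the two halves
    ip link set nvmf_tgt_br master nvmf_br
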
00:26:01.039 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:26:01.039 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:26:01.039 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:26:01.039 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:26:01.039 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:26:01.039 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:26:01.039 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:26:01.039 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:26:01.039 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms
00:26:01.039
00:26:01.039 --- 10.0.0.3 ping statistics ---
00:26:01.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:01.039 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms
00:26:01.039 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:26:01.039 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:26:01.039 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.076 ms
00:26:01.039
00:26:01.039 --- 10.0.0.4 ping statistics ---
00:26:01.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:01.039 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms
00:26:01.039 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:26:01.039 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:01.039 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms
00:26:01.039
00:26:01.039 --- 10.0.0.1 ping statistics ---
00:26:01.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:01.039 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms
00:26:01.039 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:26:01.039 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:26:01.039 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms
00:26:01.039
00:26:01.039 --- 10.0.0.2 ping statistics ---
00:26:01.039 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:01.039 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms
00:26:01.039 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:01.039 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0
00:26:01.039 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:26:01.039 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:01.039 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:26:01.039 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:26:01.039 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:01.039 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:26:01.039 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:26:01.039 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1
00:26:01.039 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:26:01.039 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable
00:26:01.039 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x
00:26:01.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:01.039 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=84889
00:26:01.040 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 84889
00:26:01.040 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 84889 ']'
00:26:01.040 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:01.040 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:01.040 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:01.040 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:01.040 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x
00:26:01.040 12:24:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1
00:26:01.297 [2024-11-25 12:24:56.917698] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization...
00:26:01.297 [2024-11-25 12:24:56.917895] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:01.297 [2024-11-25 12:24:57.058176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:01.297 [2024-11-25 12:24:57.093094] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:01.297 [2024-11-25 12:24:57.093222] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:01.297 [2024-11-25 12:24:57.093280] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:01.297 [2024-11-25 12:24:57.093307] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:01.297 [2024-11-25 12:24:57.093323] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:01.297 [2024-11-25 12:24:57.093589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:02.231 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:02.231 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:26:02.231 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:02.231 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:02.231 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:02.231 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:02.231 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:02.231 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=84933 00:26:02.231 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:26:02.231 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:26:02.231 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:26:02.231 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:26:02.231 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:02.231 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:02.231 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:02.231 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:02.231 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:02.231 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:02.231 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:02.231 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:02.231 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 
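
The get_main_ns_ip call that just echoed 10.0.0.1 picks the address a host-namespace initiator should dial for a target in the main namespace, keyed by transport. Roughly (a sketch; the name of the variable carrying the transport string is an assumption, since the traces only show the tcp branch):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
        ip=${ip_candidates[$TEST_TRANSPORT]}    # "NVMF_INITIATOR_IP" for tcp
        echo "${!ip}"                           # indirect expansion: 10.0.0.1 here
    }
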
00:26:02.231 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:26:02.231 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:26:02.231 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=572c94a4-74cf-4ed3-a51d-f46f81a56e28 00:26:02.231 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:26:02.231 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=16b3238f-691a-4b5b-a8ef-7867b690a8f6 00:26:02.231 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:26:02.231 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=216db51b-6133-48af-86f2-bbc431118851 00:26:02.231 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:26:02.231 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.231 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:02.231 null0 00:26:02.231 null1 00:26:02.231 [2024-11-25 12:24:57.864430] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:26:02.231 null2 00:26:02.231 [2024-11-25 12:24:57.864903] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84933 ] 00:26:02.231 [2024-11-25 12:24:57.871896] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:02.231 [2024-11-25 12:24:57.895967] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:02.231 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.231 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 84933 /var/tmp/tgt2.sock 00:26:02.231 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 84933 ']' 00:26:02.231 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:26:02.231 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:02.231 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:26:02.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
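
The null0/null1/null2 lines above come from a batched rpc_cmd provisioning the second target. Spelled out as individual calls it would look roughly like this; a sketch only, since the real nsid.sh drives one JSON-RPC batch, the null-bdev sizes and the -a flag are assumed, and only cnode2 (the subsystem the host connects to below) is shown, with cnode0/cnode1 analogous. What the log does confirm is that the three namespaces end up carrying ns1uuid/ns2uuid/ns3uuid and that the target listens on 10.0.0.1:4421.

    rpc2() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock "$@"; }
    rpc2 bdev_null_create null0 1024 512    # sizes assumed
    rpc2 bdev_null_create null1 1024 512
    rpc2 bdev_null_create null2 1024 512
    rpc2 nvmf_create_transport -t tcp -o
    rpc2 nvmf_create_subsystem nqn.2024-10.io.spdk:cnode2 -a
    rpc2 nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 null0 -u "$ns1uuid"   # pin the namespace UUID
    rpc2 nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 null1 -u "$ns2uuid"
    rpc2 nvmf_subsystem_add_ns nqn.2024-10.io.spdk:cnode2 null2 -u "$ns3uuid"
    rpc2 nvmf_subsystem_add_listener nqn.2024-10.io.spdk:cnode2 -t tcp -a 10.0.0.1 -s 4421
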
00:26:02.231 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:02.231 12:24:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:02.231 [2024-11-25 12:24:58.006359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:02.231 [2024-11-25 12:24:58.041844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:02.489 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:02.489 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:26:02.489 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:26:03.055 [2024-11-25 12:24:58.627399] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:03.055 [2024-11-25 12:24:58.643452] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:26:03.055 nvme0n1 nvme0n2 00:26:03.055 nvme1n1 00:26:03.055 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:26:03.055 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:26:03.055 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:26:03.055 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:26:03.055 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:26:03.055 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:26:03.055 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:26:03.055 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:26:03.055 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:26:03.055 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:26:03.055 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:26:03.055 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:26:03.055 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:26:03.055 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:26:03.055 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:26:03.055 12:24:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:26:03.988 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:26:03.988 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:26:03.988 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:26:03.988 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:26:03.988 12:24:59 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:26:03.988 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 572c94a4-74cf-4ed3-a51d-f46f81a56e28 00:26:03.988 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:26:03.988 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:26:03.988 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:26:03.988 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:26:03.988 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:26:04.246 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=572c94a474cf4ed3a51df46f81a56e28 00:26:04.246 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 572C94A474CF4ED3A51DF46F81A56E28 00:26:04.246 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 572C94A474CF4ED3A51DF46F81A56E28 == \5\7\2\C\9\4\A\4\7\4\C\F\4\E\D\3\A\5\1\D\F\4\6\F\8\1\A\5\6\E\2\8 ]] 00:26:04.246 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:26:04.246 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:26:04.246 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:26:04.246 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:26:04.246 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:26:04.246 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:26:04.246 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:26:04.246 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 16b3238f-691a-4b5b-a8ef-7867b690a8f6 00:26:04.246 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:26:04.246 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:26:04.246 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:26:04.246 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:26:04.246 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:26:04.246 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=16b3238f691a4b5ba8ef7867b690a8f6 00:26:04.246 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 16B3238F691A4B5BA8EF7867B690A8F6 00:26:04.246 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 16B3238F691A4B5BA8EF7867B690A8F6 == \1\6\B\3\2\3\8\F\6\9\1\A\4\B\5\B\A\8\E\F\7\8\6\7\B\6\9\0\A\8\F\6 ]] 00:26:04.246 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:26:04.246 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:26:04.246 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:26:04.246 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:26:04.246 12:24:59 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:26:04.246 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:26:04.246 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:26:04.246 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 216db51b-6133-48af-86f2-bbc431118851 00:26:04.246 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:26:04.246 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:26:04.246 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:26:04.246 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:26:04.246 12:24:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:26:04.246 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=216db51b613348af86f2bbc431118851 00:26:04.246 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 216DB51B613348AF86F2BBC431118851 00:26:04.246 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 216DB51B613348AF86F2BBC431118851 == \2\1\6\D\B\5\1\B\6\1\3\3\4\8\A\F\8\6\F\2\B\B\C\4\3\1\1\1\8\8\5\1 ]] 00:26:04.246 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:26:04.511 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:26:04.511 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:26:04.511 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 84933 00:26:04.511 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 84933 ']' 00:26:04.511 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 84933 00:26:04.511 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:26:04.511 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:04.511 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84933 00:26:04.511 killing process with pid 84933 00:26:04.511 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:04.511 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:04.511 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84933' 00:26:04.511 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 84933 00:26:04.511 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 84933 00:26:04.786 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:26:04.786 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:04.786 12:25:00 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:26:05.352 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:05.352 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 
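
Each of the three checks just traced does the same UUID-to-NGUID round trip. In miniature, with uuid2nguid and nvme_get_nguid paraphrased from the traces (the upper-casing step in uuid2nguid is an inference from the 572C94A4... echo, not something the trace shows directly):

    uuid=572c94a4-74cf-4ed3-a51d-f46f81a56e28              # ns1uuid generated earlier
    want=$(tr -d - <<< "$uuid"); want=${want^^}            # strip dashes, upcase
    got=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid)
    [[ ${got^^} == "$want" ]] && echo "NSID 1 advertises the UUID it was provisioned with"
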
00:26:05.352 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:05.352 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:05.352 rmmod nvme_tcp 00:26:05.352 rmmod nvme_fabrics 00:26:05.352 rmmod nvme_keyring 00:26:05.352 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:05.352 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:26:05.352 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:26:05.352 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 84889 ']' 00:26:05.352 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 84889 00:26:05.352 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 84889 ']' 00:26:05.352 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 84889 00:26:05.352 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:26:05.352 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:05.352 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84889 00:26:05.352 killing process with pid 84889 00:26:05.352 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:05.352 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:05.352 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84889' 00:26:05.352 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 84889 00:26:05.352 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 84889 00:26:05.610 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:05.610 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:05.610 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:05.610 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:26:05.610 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:26:05.610 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:05.610 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:26:05.610 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:05.610 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:05.611 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:05.611 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:05.611 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:05.611 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:05.611 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link 
set nvmf_init_br down 00:26:05.611 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:05.611 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:05.611 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:05.611 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:05.611 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:05.611 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:05.611 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:05.611 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:05.611 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:05.611 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:05.611 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:05.611 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:05.869 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:26:05.869 00:26:05.869 real 0m5.139s 00:26:05.869 user 0m7.454s 00:26:05.869 sys 0m0.999s 00:26:05.869 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:05.869 ************************************ 00:26:05.869 END TEST nvmf_nsid 00:26:05.869 ************************************ 00:26:05.869 12:25:01 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:26:05.869 12:25:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:26:05.869 ************************************ 00:26:05.869 END TEST nvmf_target_extra 00:26:05.869 ************************************ 00:26:05.869 00:26:05.869 real 6m29.395s 00:26:05.869 user 15m49.738s 00:26:05.869 sys 1m4.366s 00:26:05.869 12:25:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:05.869 12:25:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:05.869 12:25:01 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:26:05.869 12:25:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:05.869 12:25:01 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:05.869 12:25:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:05.869 ************************************ 00:26:05.869 START TEST nvmf_host 00:26:05.869 ************************************ 00:26:05.869 12:25:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:26:05.869 * Looking for test storage... 
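
For reference, the START/END banners and the real/user/sys totals that bracket each of these tests come from the run_test wrapper in autotest_common.sh. Roughly like the sketch below; this is not the verbatim helper, which also records per-test timing data for the final report:

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                   # the real/user/sys lines above come from this
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
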
00:26:05.869 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:26:05.869 12:25:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:05.869 12:25:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:26:05.869 12:25:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:05.869 12:25:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:05.869 12:25:01 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:06.129 12:25:01 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:06.129 12:25:01 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:06.129 12:25:01 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:26:06.129 12:25:01 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:26:06.129 12:25:01 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:26:06.129 12:25:01 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:26:06.129 12:25:01 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:26:06.129 12:25:01 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:26:06.129 12:25:01 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:26:06.129 12:25:01 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:06.129 12:25:01 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:26:06.129 12:25:01 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:26:06.129 12:25:01 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:06.129 12:25:01 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:06.129 12:25:01 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:26:06.129 12:25:01 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:26:06.129 12:25:01 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:06.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:06.130 --rc genhtml_branch_coverage=1 00:26:06.130 --rc genhtml_function_coverage=1 00:26:06.130 --rc genhtml_legend=1 00:26:06.130 --rc geninfo_all_blocks=1 00:26:06.130 --rc geninfo_unexecuted_blocks=1 00:26:06.130 00:26:06.130 ' 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:06.130 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:26:06.130 --rc genhtml_branch_coverage=1 00:26:06.130 --rc genhtml_function_coverage=1 00:26:06.130 --rc genhtml_legend=1 00:26:06.130 --rc geninfo_all_blocks=1 00:26:06.130 --rc geninfo_unexecuted_blocks=1 00:26:06.130 00:26:06.130 ' 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:06.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:06.130 --rc genhtml_branch_coverage=1 00:26:06.130 --rc genhtml_function_coverage=1 00:26:06.130 --rc genhtml_legend=1 00:26:06.130 --rc geninfo_all_blocks=1 00:26:06.130 --rc geninfo_unexecuted_blocks=1 00:26:06.130 00:26:06.130 ' 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:06.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:06.130 --rc genhtml_branch_coverage=1 00:26:06.130 --rc genhtml_function_coverage=1 00:26:06.130 --rc genhtml_legend=1 00:26:06.130 --rc geninfo_all_blocks=1 00:26:06.130 --rc geninfo_unexecuted_blocks=1 00:26:06.130 00:26:06.130 ' 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:06.130 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 
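
Each suite here runs under run_test, the harness wrapper responsible for the START/END banners and the real/user/sys timing block that closed nvmf_nsid above. A rough, hypothetical reimplementation of the pattern (the real definition lives in autotest_common.sh and also handles the xtrace toggling and argument checks visible in the trace):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"        # bash's time builtin emits the real/user/sys lines
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }

    run_test nvmf_multicontroller \
        /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
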
00:26:06.130 12:25:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.130 ************************************ 00:26:06.130 START TEST nvmf_multicontroller 00:26:06.130 ************************************ 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:26:06.130 * Looking for test storage... 00:26:06.130 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:26:06.130 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:06.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:06.131 --rc genhtml_branch_coverage=1 00:26:06.131 --rc genhtml_function_coverage=1 00:26:06.131 --rc genhtml_legend=1 00:26:06.131 --rc geninfo_all_blocks=1 00:26:06.131 --rc geninfo_unexecuted_blocks=1 00:26:06.131 00:26:06.131 ' 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:06.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:06.131 --rc genhtml_branch_coverage=1 00:26:06.131 --rc genhtml_function_coverage=1 00:26:06.131 --rc genhtml_legend=1 00:26:06.131 --rc geninfo_all_blocks=1 00:26:06.131 --rc geninfo_unexecuted_blocks=1 00:26:06.131 00:26:06.131 ' 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:06.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:06.131 --rc genhtml_branch_coverage=1 00:26:06.131 --rc genhtml_function_coverage=1 00:26:06.131 --rc genhtml_legend=1 00:26:06.131 --rc geninfo_all_blocks=1 00:26:06.131 --rc geninfo_unexecuted_blocks=1 00:26:06.131 00:26:06.131 ' 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:06.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:06.131 --rc genhtml_branch_coverage=1 00:26:06.131 --rc genhtml_function_coverage=1 00:26:06.131 --rc genhtml_legend=1 00:26:06.131 --rc geninfo_all_blocks=1 00:26:06.131 --rc geninfo_unexecuted_blocks=1 00:26:06.131 00:26:06.131 ' 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:26:06.131 12:25:01 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:06.131 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:06.131 12:25:01 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:06.131 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:06.132 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:06.132 12:25:01 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:06.132 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:06.132 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:06.132 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:06.132 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:06.132 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:06.132 Cannot find device "nvmf_init_br" 00:26:06.132 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # true 00:26:06.132 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:06.132 Cannot find device "nvmf_init_br2" 00:26:06.132 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # true 00:26:06.132 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:06.132 Cannot find device "nvmf_tgt_br" 00:26:06.132 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@164 -- # true 00:26:06.132 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:06.132 Cannot find device "nvmf_tgt_br2" 00:26:06.132 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@165 -- # true 00:26:06.132 12:25:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:06.391 Cannot find device "nvmf_init_br" 00:26:06.391 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@166 -- # true 00:26:06.391 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:06.391 Cannot find device "nvmf_init_br2" 00:26:06.391 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@167 -- # true 00:26:06.391 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:06.391 Cannot find device "nvmf_tgt_br" 00:26:06.391 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@168 -- # true 00:26:06.391 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:06.391 Cannot find device "nvmf_tgt_br2" 00:26:06.391 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@169 -- # true 00:26:06.391 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:06.391 Cannot find device "nvmf_br" 00:26:06.391 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@170 -- # true 00:26:06.391 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:06.391 Cannot find device "nvmf_init_if" 00:26:06.391 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@171 -- # true 00:26:06.391 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:06.391 Cannot find device "nvmf_init_if2" 00:26:06.391 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@172 -- # true 00:26:06.391 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:06.391 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:06.391 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@173 -- # true 00:26:06.391 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:06.391 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:06.391 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@174 -- # true 00:26:06.391 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:06.391 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:06.391 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:06.391 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:06.391 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:06.391 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:06.391 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:06.391 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:06.391 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:06.391 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:06.391 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:06.391 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:06.391 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:06.391 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:06.391 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:06.391 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:06.392 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:06.392 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:06.392 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:06.392 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:06.392 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:06.392 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:06.392 12:25:02 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:06.392 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:06.392 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:06.392 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:06.392 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:06.392 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:06.392 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:06.392 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:06.392 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:06.392 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:06.392 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:06.392 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:06.392 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:26:06.392 00:26:06.392 --- 10.0.0.3 ping statistics --- 00:26:06.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:06.392 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:26:06.392 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:06.392 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:06.392 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.076 ms 00:26:06.392 00:26:06.392 --- 10.0.0.4 ping statistics --- 00:26:06.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:06.392 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:26:06.392 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:06.392 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:06.392 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:26:06.392 00:26:06.392 --- 10.0.0.1 ping statistics --- 00:26:06.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:06.392 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:26:06.392 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:06.392 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:06.392 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:26:06.392 00:26:06.392 --- 10.0.0.2 ping statistics --- 00:26:06.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:06.392 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:26:06.392 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:06.392 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@461 -- # return 0 00:26:06.392 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:06.392 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:06.392 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:06.392 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:06.392 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:06.392 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:06.392 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:06.650 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:26:06.650 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:06.650 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:06.650 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:06.650 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=85311 00:26:06.650 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 85311 00:26:06.650 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 85311 ']' 00:26:06.650 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:06.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:06.650 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:06.650 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:06.650 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:06.650 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:06.650 12:25:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:06.650 [2024-11-25 12:25:02.322462] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
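
The nvmf_veth_init block above built the test network from nothing: veth pairs whose initiator ends (10.0.0.1/.2) stay in the root namespace while the target ends (10.0.0.3/.4) live inside nvmf_tgt_ns_spdk, all stitched together by the nvmf_br bridge, with tagged iptables rules opening port 4420. Condensed to one pair per side (same names and addresses as the log), the essential steps are:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # One bridge joins the root-namespace peer ends
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # Rules carry an SPDK_NVMF comment so the teardown's grep -v sweep can drop them
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

    ping -c 1 10.0.0.3    # initiator -> target, as verified in the log

The namespace keeps the target's sockets fully separate from the host stack, which is why every nvmf_tgt invocation below is prefixed with ip netns exec nvmf_tgt_ns_spdk.
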
00:26:06.650 [2024-11-25 12:25:02.322521] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:06.650 [2024-11-25 12:25:02.463446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:06.650 [2024-11-25 12:25:02.501207] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:06.650 [2024-11-25 12:25:02.501249] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:06.650 [2024-11-25 12:25:02.501256] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:06.650 [2024-11-25 12:25:02.501262] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:06.650 [2024-11-25 12:25:02.501266] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:06.650 [2024-11-25 12:25:02.502081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:06.650 [2024-11-25 12:25:02.502746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:06.650 [2024-11-25 12:25:02.502778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:07.584 12:25:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:07.584 12:25:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:26:07.584 12:25:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:07.584 12:25:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:07.584 12:25:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:07.584 12:25:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:07.584 12:25:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:07.584 12:25:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.584 12:25:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:07.584 [2024-11-25 12:25:03.245775] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:07.584 12:25:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.584 12:25:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:07.584 12:25:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.584 12:25:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:07.584 Malloc0 00:26:07.584 12:25:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.584 12:25:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:07.584 12:25:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.584 12:25:03 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- common/autotest_common.sh@10 -- # set +x 00:26:07.584 12:25:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.584 12:25:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:07.584 12:25:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.584 12:25:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:07.584 12:25:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.584 12:25:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:07.584 12:25:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.584 12:25:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:07.584 [2024-11-25 12:25:03.299365] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:07.584 12:25:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.584 12:25:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:26:07.584 12:25:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.584 12:25:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:07.584 [2024-11-25 12:25:03.307312] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:26:07.584 12:25:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.584 12:25:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:07.584 12:25:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.584 12:25:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:07.584 Malloc1 00:26:07.584 12:25:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.584 12:25:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:26:07.584 12:25:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.584 12:25:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:07.584 12:25:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.584 12:25:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:26:07.584 12:25:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.584 12:25:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:07.584 12:25:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.584 12:25:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:26:07.584 12:25:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.584 12:25:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:07.584 12:25:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.585 12:25:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4421 00:26:07.585 12:25:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.585 12:25:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:07.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:07.585 12:25:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.585 12:25:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=85363 00:26:07.585 12:25:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:07.585 12:25:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 85363 /var/tmp/bdevperf.sock 00:26:07.585 12:25:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 85363 ']' 00:26:07.585 12:25:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:07.585 12:25:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:07.585 12:25:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
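
Stripped of the xtrace noise, the target bring-up that just completed is a handful of JSON-RPCs, issued before the harness pivots to launching bdevperf as a second SPDK process. rpc_cmd is a thin wrapper over scripts/rpc.py against the default /var/tmp/spdk.sock, so the equivalent direct invocation would look roughly like this (flags copied from the log; the loop over the two subsystems is an editorial condensation):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0   # 64 MiB bdev, 512-byte blocks
    $rpc bdev_malloc_create 64 512 -b Malloc1
    for i in 1 2; do
        $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
        $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$((i - 1))"
        # Two listeners per subsystem so the multipath cases below have a second port
        $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.3 -s 4420
        $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.3 -s 4421
    done
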
00:26:07.585 12:25:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:07.585 12:25:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:07.585 12:25:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:26:08.519 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:08.519 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:26:08.519 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:26:08.519 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.519 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:08.519 NVMe0n1 00:26:08.519 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.519 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:08.519 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.519 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:08.519 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:26:08.519 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.519 1 00:26:08.519 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:26:08.519 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:26:08.519 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:26:08.519 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:08.519 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:08.519 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:08.519 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:08.519 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:26:08.519 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.519 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 
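
bdevperf was started with -z, so it idles until driven over its own RPC socket (-r /var/tmp/bdevperf.sock); the first bdev_nvme_attach_controller above succeeds and produces NVMe0n1, confirmed by the grep -c count of 1. Everything after that is negative testing: re-attaching under the same controller name NVMe0 with a different hostnqn (-q) must fail with Code=-114, which is exactly the error that follows. The NOT helper inverts the exit status so an expected failure counts as a pass; a simplified sketch of the idea (the real helper in autotest_common.sh also tracks the es value visible in the trace):

    NOT() {
        # Pass only when the wrapped command fails
        if "$@"; then
            return 1
        fi
        return 0
    }

    # Expected to fail: a controller named NVMe0 already exists on 10.0.0.3:4420
    NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001
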
00:26:08.519 2024/11/25 12:25:04 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 hostnqn:nqn.2021-09-7.io.spdk:00001 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:26:08.519 request: 00:26:08.519 { 00:26:08.519 "method": "bdev_nvme_attach_controller", 00:26:08.519 "params": { 00:26:08.519 "name": "NVMe0", 00:26:08.519 "trtype": "tcp", 00:26:08.519 "traddr": "10.0.0.3", 00:26:08.519 "adrfam": "ipv4", 00:26:08.519 "trsvcid": "4420", 00:26:08.519 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:08.519 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:26:08.519 "hostaddr": "10.0.0.1", 00:26:08.519 "prchk_reftag": false, 00:26:08.519 "prchk_guard": false, 00:26:08.519 "hdgst": false, 00:26:08.519 "ddgst": false, 00:26:08.519 "allow_unrecognized_csi": false 00:26:08.519 } 00:26:08.519 } 00:26:08.519 Got JSON-RPC error response 00:26:08.519 GoRPCClient: error on JSON-RPC call 00:26:08.519 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:08.519 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:26:08.519 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:08.519 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:08.519 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:08.519 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:26:08.519 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:26:08.519 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:26:08.519 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:08.519 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:08.519 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:08.519 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:08.519 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:26:08.519 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.519 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:08.519 request: 00:26:08.519 2024/11/25 12:25:04 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 
allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:26:08.519 { 00:26:08.519 "method": "bdev_nvme_attach_controller", 00:26:08.519 "params": { 00:26:08.519 "name": "NVMe0", 00:26:08.519 "trtype": "tcp", 00:26:08.519 "traddr": "10.0.0.3", 00:26:08.519 "adrfam": "ipv4", 00:26:08.520 "trsvcid": "4420", 00:26:08.520 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:08.520 "hostaddr": "10.0.0.1", 00:26:08.778 "prchk_reftag": false, 00:26:08.778 "prchk_guard": false, 00:26:08.778 "hdgst": false, 00:26:08.778 "ddgst": false, 00:26:08.778 "allow_unrecognized_csi": false 00:26:08.778 } 00:26:08.778 } 00:26:08.778 Got JSON-RPC error response 00:26:08.778 GoRPCClient: error on JSON-RPC call 00:26:08.778 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:08.778 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:26:08.778 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:08.778 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:08.778 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:08.778 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:26:08.778 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:26:08.778 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:26:08.778 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:08.778 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:08.778 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:08.778 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:08.778 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:26:08.778 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.778 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:08.778 2024/11/25 12:25:04 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 multipath:disable name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 
traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:26:08.778 request: 00:26:08.778 { 00:26:08.778 "method": "bdev_nvme_attach_controller", 00:26:08.778 "params": { 00:26:08.778 "name": "NVMe0", 00:26:08.778 "trtype": "tcp", 00:26:08.778 "traddr": "10.0.0.3", 00:26:08.778 "adrfam": "ipv4", 00:26:08.778 "trsvcid": "4420", 00:26:08.778 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:08.778 "hostaddr": "10.0.0.1", 00:26:08.778 "prchk_reftag": false, 00:26:08.778 "prchk_guard": false, 00:26:08.778 "hdgst": false, 00:26:08.778 "ddgst": false, 00:26:08.778 "multipath": "disable", 00:26:08.778 "allow_unrecognized_csi": false 00:26:08.778 } 00:26:08.778 } 00:26:08.778 Got JSON-RPC error response 00:26:08.778 GoRPCClient: error on JSON-RPC call 00:26:08.778 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:08.778 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:26:08.778 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:08.778 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:08.778 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:08.778 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:26:08.778 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:26:08.778 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:26:08.778 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:08.778 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:08.778 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:08.778 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:08.778 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:26:08.778 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.778 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:08.778 2024/11/25 12:25:04 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostaddr:10.0.0.1 multipath:failover name:NVMe0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already 
exists with the specified network path 00:26:08.778 request: 00:26:08.778 { 00:26:08.778 "method": "bdev_nvme_attach_controller", 00:26:08.778 "params": { 00:26:08.778 "name": "NVMe0", 00:26:08.778 "trtype": "tcp", 00:26:08.778 "traddr": "10.0.0.3", 00:26:08.778 "adrfam": "ipv4", 00:26:08.778 "trsvcid": "4420", 00:26:08.778 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:08.778 "hostaddr": "10.0.0.1", 00:26:08.778 "prchk_reftag": false, 00:26:08.778 "prchk_guard": false, 00:26:08.778 "hdgst": false, 00:26:08.778 "ddgst": false, 00:26:08.778 "multipath": "failover", 00:26:08.778 "allow_unrecognized_csi": false 00:26:08.778 } 00:26:08.778 } 00:26:08.778 Got JSON-RPC error response 00:26:08.778 GoRPCClient: error on JSON-RPC call 00:26:08.778 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:08.778 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:26:08.778 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:08.778 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:08.778 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:08.778 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:08.778 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.778 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:08.778 NVMe0n1 00:26:08.778 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.778 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:08.778 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.778 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:08.778 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.778 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:26:08.778 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.778 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:08.778 00:26:08.778 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.778 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:26:08.778 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:08.778 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.778 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:08.778 12:25:04 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.778 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:26:08.778 12:25:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:10.153 { 00:26:10.153 "results": [ 00:26:10.153 { 00:26:10.153 "job": "NVMe0n1", 00:26:10.153 "core_mask": "0x1", 00:26:10.153 "workload": "write", 00:26:10.153 "status": "finished", 00:26:10.153 "queue_depth": 128, 00:26:10.153 "io_size": 4096, 00:26:10.153 "runtime": 1.004601, 00:26:10.153 "iops": 21075.033769625952, 00:26:10.153 "mibps": 82.32435066260138, 00:26:10.153 "io_failed": 0, 00:26:10.153 "io_timeout": 0, 00:26:10.153 "avg_latency_us": 6064.778958566467, 00:26:10.153 "min_latency_us": 2785.28, 00:26:10.153 "max_latency_us": 18047.606153846154 00:26:10.153 } 00:26:10.153 ], 00:26:10.153 "core_count": 1 00:26:10.153 } 00:26:10.153 12:25:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:26:10.153 12:25:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.153 12:25:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:10.153 12:25:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.153 12:25:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n 10.0.0.2 ]] 00:26:10.153 12:25:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme1 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:26:10.153 12:25:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.153 12:25:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:10.153 nvme1n1 00:26:10.153 12:25:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.153 12:25:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2 00:26:10.153 12:25:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.153 12:25:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # jq -r '.[].peer_address.traddr' 00:26:10.153 12:25:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:10.153 12:25:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.153 12:25:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@106 -- # [[ 10.0.0.1 == \1\0\.\0\.\0\.\1 ]] 00:26:10.153 12:25:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller nvme1 00:26:10.153 12:25:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.153 12:25:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:10.153 12:25:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.153 12:25:05 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@109 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme1 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 00:26:10.153 12:25:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.153 12:25:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:10.153 nvme1n1 00:26:10.153 12:25:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.153 12:25:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode2 00:26:10.153 12:25:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.153 12:25:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:10.153 12:25:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # jq -r '.[].peer_address.traddr' 00:26:10.153 12:25:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.153 12:25:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # [[ 10.0.0.2 == \1\0\.\0\.\0\.\2 ]] 00:26:10.153 12:25:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 85363 00:26:10.153 12:25:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 85363 ']' 00:26:10.153 12:25:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 85363 00:26:10.153 12:25:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:26:10.153 12:25:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:10.154 12:25:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85363 00:26:10.154 killing process with pid 85363 00:26:10.154 12:25:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:10.154 12:25:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:10.154 12:25:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85363' 00:26:10.154 12:25:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 85363 00:26:10.154 12:25:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 85363 00:26:10.412 12:25:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:10.412 12:25:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.412 12:25:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:10.412 12:25:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.412 12:25:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:10.412 12:25:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:10.412 12:25:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set 
+x 00:26:10.412 12:25:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:10.412 12:25:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:26:10.412 12:25:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:26:10.412 12:25:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:26:10.412 12:25:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:26:10.412 12:25:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:26:10.412 12:25:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:26:10.412 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:26:10.412 [2024-11-25 12:25:03.399837] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:26:10.412 [2024-11-25 12:25:03.399910] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85363 ] 00:26:10.412 [2024-11-25 12:25:03.540023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:10.412 [2024-11-25 12:25:03.578048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:10.412 [2024-11-25 12:25:04.567822] bdev.c:4696:bdev_name_add: *ERROR*: Bdev name ca8f811c-7723-4766-beaa-7805a3930542 already exists 00:26:10.412 [2024-11-25 12:25:04.567880] bdev.c:7832:bdev_register: *ERROR*: Unable to add uuid:ca8f811c-7723-4766-beaa-7805a3930542 alias for bdev NVMe1n1 00:26:10.412 [2024-11-25 12:25:04.567891] bdev_nvme.c:4659:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:26:10.412 Running I/O for 1 seconds... 
00:26:10.412 21044.00 IOPS, 82.20 MiB/s 00:26:10.412 Latency(us) 00:26:10.412 [2024-11-25T12:25:06.288Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:10.412 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:26:10.412 NVMe0n1 : 1.00 21075.03 82.32 0.00 0.00 6064.78 2785.28 18047.61 00:26:10.412 [2024-11-25T12:25:06.288Z] =================================================================================================================== 00:26:10.412 [2024-11-25T12:25:06.288Z] Total : 21075.03 82.32 0.00 0.00 6064.78 2785.28 18047.61 00:26:10.412 Received shutdown signal, test time was about 1.000000 seconds 00:26:10.412 00:26:10.413 Latency(us) 00:26:10.413 [2024-11-25T12:25:06.289Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:10.413 [2024-11-25T12:25:06.289Z] =================================================================================================================== 00:26:10.413 [2024-11-25T12:25:06.289Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:10.413 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:26:10.413 12:25:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:26:10.413 12:25:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:26:10.413 12:25:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:26:10.413 12:25:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:10.413 12:25:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:26:12.313 12:25:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:12.313 12:25:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:26:12.313 12:25:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:12.313 12:25:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:12.313 rmmod nvme_tcp 00:26:12.313 rmmod nvme_fabrics 00:26:12.313 rmmod nvme_keyring 00:26:12.313 12:25:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:12.313 12:25:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:26:12.313 12:25:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:26:12.313 12:25:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 85311 ']' 00:26:12.313 12:25:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 85311 00:26:12.313 12:25:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 85311 ']' 00:26:12.313 12:25:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 85311 00:26:12.313 12:25:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:26:12.313 12:25:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:12.313 12:25:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85311 00:26:12.313 killing process with pid 85311 00:26:12.313 12:25:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:12.313 12:25:08 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:12.313 12:25:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85311' 00:26:12.313 12:25:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 85311 00:26:12.313 12:25:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 85311 00:26:12.313 12:25:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:12.313 12:25:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:12.313 12:25:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:12.313 12:25:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:26:12.572 12:25:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:26:12.572 12:25:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:26:12.572 12:25:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:12.572 12:25:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:12.572 12:25:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:12.572 12:25:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:12.572 12:25:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:12.572 12:25:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:12.572 12:25:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:12.572 12:25:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:12.572 12:25:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:12.572 12:25:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:12.572 12:25:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:12.572 12:25:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:12.572 12:25:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:12.572 12:25:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:12.572 12:25:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:12.572 12:25:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:12.572 12:25:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:12.572 12:25:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:12.572 12:25:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:12.572 12:25:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
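That completes the teardown: the four bridge ports are detached and brought down, the bridge and the initiator-side veth ends are deleted, the target-side ends are deleted inside the namespace, and the namespace itself is removed. A standalone sketch of the same sequence, assuming the `|| true` guards (not in the trace) so a partially built topology still cleans up, and assuming `_remove_spdk_ns` reduces to `ip netns delete`:

    # Mirror of the traced teardown of the harness's virtual network.
    for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$port" nomaster || true   # detach from nvmf_br
        ip link set "$port" down || true
    done
    ip link delete nvmf_br type bridge || true
    ip link delete nvmf_init_if || true        # deleting one veth end also removes its peer
    ip link delete nvmf_init_if2 || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 || true
    ip netns delete nvmf_tgt_ns_spdk || true   # assumed body of _remove_spdk_ns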
00:26:12.572 12:25:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@300 -- # return 0 00:26:12.572 00:26:12.572 real 0m6.627s 00:26:12.572 user 0m20.262s 00:26:12.572 sys 0m0.910s 00:26:12.572 12:25:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:12.572 ************************************ 00:26:12.572 END TEST nvmf_multicontroller 00:26:12.572 ************************************ 00:26:12.572 12:25:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:26:12.832 12:25:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:26:12.832 12:25:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:12.832 12:25:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:12.832 12:25:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:12.832 ************************************ 00:26:12.832 START TEST nvmf_aer 00:26:12.832 ************************************ 00:26:12.832 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:26:12.832 * Looking for test storage... 00:26:12.832 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:12.832 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:12.832 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:26:12.832 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:12.832 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:12.832 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:12.832 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:12.832 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:12.832 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:26:12.832 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:26:12.832 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:26:12.832 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:26:12.832 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:26:12.832 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:26:12.832 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:26:12.832 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:12.832 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:26:12.832 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:26:12.832 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:12.832 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:12.832 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:26:12.832 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:26:12.832 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:12.832 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:26:12.832 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:26:12.832 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:26:12.832 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:26:12.832 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:12.832 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:26:12.832 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:26:12.832 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:12.832 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:12.832 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:26:12.832 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:12.832 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:12.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.832 --rc genhtml_branch_coverage=1 00:26:12.832 --rc genhtml_function_coverage=1 00:26:12.832 --rc genhtml_legend=1 00:26:12.832 --rc geninfo_all_blocks=1 00:26:12.832 --rc geninfo_unexecuted_blocks=1 00:26:12.832 00:26:12.832 ' 00:26:12.832 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:12.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.832 --rc genhtml_branch_coverage=1 00:26:12.832 --rc genhtml_function_coverage=1 00:26:12.832 --rc genhtml_legend=1 00:26:12.832 --rc geninfo_all_blocks=1 00:26:12.832 --rc geninfo_unexecuted_blocks=1 00:26:12.832 00:26:12.832 ' 00:26:12.832 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:12.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.832 --rc genhtml_branch_coverage=1 00:26:12.832 --rc genhtml_function_coverage=1 00:26:12.832 --rc genhtml_legend=1 00:26:12.832 --rc geninfo_all_blocks=1 00:26:12.832 --rc geninfo_unexecuted_blocks=1 00:26:12.832 00:26:12.832 ' 00:26:12.832 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:12.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.832 --rc genhtml_branch_coverage=1 00:26:12.832 --rc genhtml_function_coverage=1 00:26:12.832 --rc genhtml_legend=1 00:26:12.832 --rc geninfo_all_blocks=1 00:26:12.832 --rc geninfo_unexecuted_blocks=1 00:26:12.832 00:26:12.832 ' 00:26:12.832 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:12.832 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:26:12.832 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:12.832 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:12.832 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:12.832 
12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:12.832 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:12.832 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:12.832 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:12.832 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:12.832 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:12.832 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:12.832 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:26:12.832 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:26:12.832 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:12.832 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:12.832 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:12.833 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ no == yes ]] 
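With a virt/tcp environment confirmed, the harness calls nvmf_veth_init (traced below): two initiator-side veth pairs on the host (10.0.0.1 and 10.0.0.2) and two target-side pairs whose inner ends move into the nvmf_tgt_ns_spdk namespace (10.0.0.3 and 10.0.0.4), with all four peer ends enslaved to the nvmf_br bridge and iptables rules admitting TCP port 4420. A condensed sketch of one initiator/target path, names and addresses as in the trace:

    # One initiator<->target path through the harness's bridge.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br up && ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # sanity check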
00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:12.833 Cannot find device "nvmf_init_br" 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # true 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:12.833 Cannot find device "nvmf_init_br2" 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # true 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:12.833 Cannot find device "nvmf_tgt_br" 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@164 -- # true 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:12.833 Cannot find device "nvmf_tgt_br2" 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@165 -- # true 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:12.833 Cannot find device "nvmf_init_br" 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@166 -- # true 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:12.833 Cannot find device "nvmf_init_br2" 00:26:12.833 12:25:08 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@167 -- # true 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:12.833 Cannot find device "nvmf_tgt_br" 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@168 -- # true 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:12.833 Cannot find device "nvmf_tgt_br2" 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # true 00:26:12.833 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:13.092 Cannot find device "nvmf_br" 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@170 -- # true 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:13.092 Cannot find device "nvmf_init_if" 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@171 -- # true 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:13.092 Cannot find device "nvmf_init_if2" 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@172 -- # true 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:13.092 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@173 -- # true 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:13.092 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@174 -- # true 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:13.092 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:13.092 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:26:13.092 00:26:13.092 --- 10.0.0.3 ping statistics --- 00:26:13.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.092 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:13.092 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:13.092 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:26:13.092 00:26:13.092 --- 10.0.0.4 ping statistics --- 00:26:13.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.092 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:13.092 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:13.092 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:26:13.092 00:26:13.092 --- 10.0.0.1 ping statistics --- 00:26:13.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.092 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:13.092 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:13.092 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:26:13.092 00:26:13.092 --- 10.0.0.2 ping statistics --- 00:26:13.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.092 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@461 -- # return 0 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:13.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=85689 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 85689 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 85689 ']' 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:13.092 12:25:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:13.092 [2024-11-25 12:25:08.955574] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:26:13.092 [2024-11-25 12:25:08.955632] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:13.351 [2024-11-25 12:25:09.096292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:13.351 [2024-11-25 12:25:09.132739] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:13.351 [2024-11-25 12:25:09.132918] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:13.351 [2024-11-25 12:25:09.132996] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:13.351 [2024-11-25 12:25:09.133035] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:13.351 [2024-11-25 12:25:09.133092] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:13.351 [2024-11-25 12:25:09.133825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:13.351 [2024-11-25 12:25:09.133900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:13.351 [2024-11-25 12:25:09.133958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:13.351 [2024-11-25 12:25:09.133958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:14.286 12:25:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:14.286 12:25:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:26:14.286 12:25:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:14.286 12:25:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:14.286 12:25:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:14.286 12:25:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:14.286 12:25:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:14.286 12:25:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.286 12:25:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:14.286 [2024-11-25 12:25:09.865187] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:14.286 12:25:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.286 12:25:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:26:14.286 12:25:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.286 12:25:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:14.286 Malloc0 00:26:14.286 12:25:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.286 12:25:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:26:14.286 12:25:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.286 12:25:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:14.286 12:25:09 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.286 12:25:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:14.286 12:25:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.286 12:25:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:14.286 12:25:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.286 12:25:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:14.286 12:25:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.286 12:25:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:14.286 [2024-11-25 12:25:09.923011] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:14.286 12:25:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.286 12:25:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:26:14.286 12:25:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.286 12:25:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:14.286 [ 00:26:14.286 { 00:26:14.286 "allow_any_host": true, 00:26:14.286 "hosts": [], 00:26:14.286 "listen_addresses": [], 00:26:14.286 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:14.286 "subtype": "Discovery" 00:26:14.286 }, 00:26:14.286 { 00:26:14.286 "allow_any_host": true, 00:26:14.286 "hosts": [], 00:26:14.286 "listen_addresses": [ 00:26:14.286 { 00:26:14.286 "adrfam": "IPv4", 00:26:14.286 "traddr": "10.0.0.3", 00:26:14.286 "trsvcid": "4420", 00:26:14.286 "trtype": "TCP" 00:26:14.286 } 00:26:14.286 ], 00:26:14.286 "max_cntlid": 65519, 00:26:14.286 "max_namespaces": 2, 00:26:14.286 "min_cntlid": 1, 00:26:14.286 "model_number": "SPDK bdev Controller", 00:26:14.286 "namespaces": [ 00:26:14.286 { 00:26:14.286 "bdev_name": "Malloc0", 00:26:14.286 "name": "Malloc0", 00:26:14.286 "nguid": "FF3F4EAC593644D6981CC58713442F36", 00:26:14.286 "nsid": 1, 00:26:14.286 "uuid": "ff3f4eac-5936-44d6-981c-c58713442f36" 00:26:14.286 } 00:26:14.286 ], 00:26:14.286 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:14.286 "serial_number": "SPDK00000000000001", 00:26:14.286 "subtype": "NVMe" 00:26:14.286 } 00:26:14.286 ] 00:26:14.286 12:25:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.286 12:25:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:26:14.286 12:25:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:26:14.286 12:25:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=85743 00:26:14.286 12:25:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:26:14.286 12:25:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:26:14.286 12:25:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:26:14.286 12:25:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:26:14.286 12:25:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:26:14.286 12:25:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:26:14.286 12:25:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:26:14.286 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:14.286 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:26:14.286 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:26:14.286 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:26:14.286 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:14.286 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:14.286 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:26:14.286 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:26:14.286 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.286 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:14.544 Malloc1 00:26:14.544 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.544 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:26:14.544 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.544 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:14.544 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.544 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:26:14.544 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.544 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:14.544 Asynchronous Event Request test 00:26:14.544 Attaching to 10.0.0.3 00:26:14.544 Attached to 10.0.0.3 00:26:14.544 Registering asynchronous event callbacks... 00:26:14.544 Starting namespace attribute notice tests for all controllers... 00:26:14.544 10.0.0.3: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:26:14.544 aer_cb - Changed Namespace 00:26:14.544 Cleaning up... 
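(For reference, the asynchronous-event exchange captured above reduces to the following sequence. This is a minimal sketch, not the harness itself: it assumes the repo layout and the /var/tmp/spdk.sock RPC socket shown in this log, and it calls scripts/rpc.py directly instead of the suite's rpc_cmd/waitforfile wrappers.)

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 --name Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
# Start the AER listener in the background with the same flags logged above;
# it creates the -t touch file once it is ready to receive events.
/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
  -n 2 -t /tmp/aer_touch_file &
# Same handshake as the waitforfile loop in the trace above.
until [ -e /tmp/aer_touch_file ]; do sleep 0.1; done
# Adding a second namespace is what fires the namespace-attribute-changed AEN,
# producing the "aer_cb - Changed Namespace" line in the output above.
$rpc bdev_malloc_create 64 4096 --name Malloc1
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
wait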
00:26:14.544 [ 00:26:14.544 { 00:26:14.544 "allow_any_host": true, 00:26:14.544 "hosts": [], 00:26:14.544 "listen_addresses": [], 00:26:14.544 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:14.544 "subtype": "Discovery" 00:26:14.544 }, 00:26:14.544 { 00:26:14.544 "allow_any_host": true, 00:26:14.544 "hosts": [], 00:26:14.544 "listen_addresses": [ 00:26:14.544 { 00:26:14.544 "adrfam": "IPv4", 00:26:14.544 "traddr": "10.0.0.3", 00:26:14.544 "trsvcid": "4420", 00:26:14.544 "trtype": "TCP" 00:26:14.544 } 00:26:14.544 ], 00:26:14.544 "max_cntlid": 65519, 00:26:14.544 "max_namespaces": 2, 00:26:14.544 "min_cntlid": 1, 00:26:14.544 "model_number": "SPDK bdev Controller", 00:26:14.544 "namespaces": [ 00:26:14.544 { 00:26:14.544 "bdev_name": "Malloc0", 00:26:14.544 "name": "Malloc0", 00:26:14.544 "nguid": "FF3F4EAC593644D6981CC58713442F36", 00:26:14.544 "nsid": 1, 00:26:14.544 "uuid": "ff3f4eac-5936-44d6-981c-c58713442f36" 00:26:14.544 }, 00:26:14.544 { 00:26:14.544 "bdev_name": "Malloc1", 00:26:14.544 "name": "Malloc1", 00:26:14.544 "nguid": "7BCF00336EC0400687FBB0C6CC9C7EDA", 00:26:14.544 "nsid": 2, 00:26:14.544 "uuid": "7bcf0033-6ec0-4006-87fb-b0c6cc9c7eda" 00:26:14.544 } 00:26:14.544 ], 00:26:14.544 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:14.544 "serial_number": "SPDK00000000000001", 00:26:14.544 "subtype": "NVMe" 00:26:14.544 } 00:26:14.544 ] 00:26:14.544 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.544 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 85743 00:26:14.544 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:26:14.544 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.545 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:14.545 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.545 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:26:14.545 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.545 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:14.545 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.545 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:14.545 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.545 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:14.545 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.545 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:26:14.545 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:26:14.545 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:14.545 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:26:14.545 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:14.545 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:26:14.545 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:14.545 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:14.545 rmmod nvme_tcp 
00:26:14.545 rmmod nvme_fabrics 00:26:14.545 rmmod nvme_keyring 00:26:14.545 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:14.545 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:26:14.545 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:26:14.545 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 85689 ']' 00:26:14.545 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 85689 00:26:14.545 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 85689 ']' 00:26:14.545 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 85689 00:26:14.545 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:26:14.545 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:14.545 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85689 00:26:14.545 killing process with pid 85689 00:26:14.545 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:14.545 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:14.545 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85689' 00:26:14.545 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 85689 00:26:14.545 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 85689 00:26:14.802 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:14.802 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:14.802 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:14.802 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:26:14.802 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:26:14.802 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:26:14.802 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:14.802 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:14.802 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:14.802 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:14.802 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:14.802 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:14.802 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:14.802 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:14.802 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:14.802 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:14.802 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:14.802 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:14.802 12:25:10 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:14.802 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:14.802 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:14.802 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:15.061 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:15.061 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:15.061 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:15.061 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:15.061 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@300 -- # return 0 00:26:15.061 00:26:15.061 real 0m2.241s 00:26:15.061 user 0m5.713s 00:26:15.061 sys 0m0.512s 00:26:15.061 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:15.061 12:25:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:26:15.061 ************************************ 00:26:15.061 END TEST nvmf_aer 00:26:15.061 ************************************ 00:26:15.061 12:25:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:26:15.061 12:25:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:15.061 12:25:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:15.061 12:25:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.061 ************************************ 00:26:15.061 START TEST nvmf_async_init 00:26:15.061 ************************************ 00:26:15.061 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:26:15.061 * Looking for test storage... 
00:26:15.061 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:15.061 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:15.061 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:15.061 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:26:15.061 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:15.061 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:15.061 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:15.061 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:15.061 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:26:15.061 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:26:15.061 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:26:15.061 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:26:15.061 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:26:15.061 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:26:15.061 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:26:15.061 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:15.061 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:26:15.061 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:26:15.061 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:15.061 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:15.061 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:26:15.061 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:26:15.061 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:15.061 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:26:15.061 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:26:15.061 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:26:15.061 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:26:15.061 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:15.061 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:26:15.061 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:26:15.061 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:15.061 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:15.061 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:26:15.061 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:15.061 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:15.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.062 --rc genhtml_branch_coverage=1 00:26:15.062 --rc genhtml_function_coverage=1 00:26:15.062 --rc genhtml_legend=1 00:26:15.062 --rc geninfo_all_blocks=1 00:26:15.062 --rc geninfo_unexecuted_blocks=1 00:26:15.062 00:26:15.062 ' 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:15.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.062 --rc genhtml_branch_coverage=1 00:26:15.062 --rc genhtml_function_coverage=1 00:26:15.062 --rc genhtml_legend=1 00:26:15.062 --rc geninfo_all_blocks=1 00:26:15.062 --rc geninfo_unexecuted_blocks=1 00:26:15.062 00:26:15.062 ' 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:15.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.062 --rc genhtml_branch_coverage=1 00:26:15.062 --rc genhtml_function_coverage=1 00:26:15.062 --rc genhtml_legend=1 00:26:15.062 --rc geninfo_all_blocks=1 00:26:15.062 --rc geninfo_unexecuted_blocks=1 00:26:15.062 00:26:15.062 ' 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:15.062 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:15.062 --rc genhtml_branch_coverage=1 00:26:15.062 --rc genhtml_function_coverage=1 00:26:15.062 --rc genhtml_legend=1 00:26:15.062 --rc geninfo_all_blocks=1 00:26:15.062 --rc geninfo_unexecuted_blocks=1 00:26:15.062 00:26:15.062 ' 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:15.062 12:25:10 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:15.062 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:26:15.062 12:25:10 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=e7693897af3d43539478c4fa53fff578 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 
00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:15.062 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:15.063 Cannot find device "nvmf_init_br" 00:26:15.063 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # true 00:26:15.063 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:15.063 Cannot find device "nvmf_init_br2" 00:26:15.063 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # true 00:26:15.063 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:15.321 Cannot find device "nvmf_tgt_br" 00:26:15.321 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@164 -- # true 00:26:15.321 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:15.321 Cannot find device "nvmf_tgt_br2" 00:26:15.321 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@165 -- # true 00:26:15.321 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:15.321 Cannot find device "nvmf_init_br" 00:26:15.321 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@166 -- # true 00:26:15.321 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:15.321 Cannot find device "nvmf_init_br2" 00:26:15.321 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@167 -- # true 00:26:15.321 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:15.321 Cannot find device "nvmf_tgt_br" 00:26:15.321 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@168 -- # true 00:26:15.321 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:15.321 Cannot find device "nvmf_tgt_br2" 00:26:15.321 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # true 00:26:15.321 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:15.321 Cannot find device "nvmf_br" 00:26:15.321 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@170 -- # true 00:26:15.321 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:15.321 Cannot find device "nvmf_init_if" 00:26:15.321 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@171 -- # true 00:26:15.321 12:25:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:15.321 Cannot find device "nvmf_init_if2" 00:26:15.321 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@172 -- # true 00:26:15.321 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:15.321 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:15.321 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@173 -- # true 00:26:15.321 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link delete nvmf_tgt_if2 00:26:15.321 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:15.321 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@174 -- # true 00:26:15.321 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:15.321 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:15.321 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:15.321 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:15.321 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:15.321 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:15.321 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:15.321 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:15.321 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:15.321 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:15.321 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:15.321 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:15.321 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:15.321 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:15.321 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:15.321 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:15.321 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:15.321 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:15.321 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:15.321 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:15.321 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:15.321 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:15.321 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:15.321 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:15.322 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:15.322 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:15.322 12:25:11 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:15.322 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:15.322 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:15.322 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:15.322 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:15.322 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:15.322 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:15.322 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:15.322 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:26:15.322 00:26:15.322 --- 10.0.0.3 ping statistics --- 00:26:15.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:15.322 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:26:15.322 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:15.580 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:15.580 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:26:15.580 00:26:15.580 --- 10.0.0.4 ping statistics --- 00:26:15.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:15.580 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:26:15.580 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:15.580 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:15.580 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:26:15.580 00:26:15.580 --- 10.0.0.1 ping statistics --- 00:26:15.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:15.580 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:26:15.580 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:15.580 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:15.580 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms 00:26:15.580 00:26:15.580 --- 10.0.0.2 ping statistics --- 00:26:15.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:15.580 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:26:15.580 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:15.580 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@461 -- # return 0 00:26:15.580 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:15.580 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:15.580 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:15.580 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:15.580 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:15.580 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:15.580 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:15.580 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:26:15.580 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:15.580 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:15.580 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:15.580 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=85964 00:26:15.580 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:26:15.580 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 85964 00:26:15.580 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 85964 ']' 00:26:15.580 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:15.580 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:15.580 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:15.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:15.580 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:15.580 12:25:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:15.580 [2024-11-25 12:25:11.259747] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:26:15.581 [2024-11-25 12:25:11.259809] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:15.581 [2024-11-25 12:25:11.398012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:15.581 [2024-11-25 12:25:11.432715] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:26:15.581 [2024-11-25 12:25:11.432758] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:15.581 [2024-11-25 12:25:11.432765] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:15.581 [2024-11-25 12:25:11.432769] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:15.581 [2024-11-25 12:25:11.432774] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:15.581 [2024-11-25 12:25:11.433022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:16.516 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:16.516 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:26:16.516 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:16.516 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:16.516 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:16.516 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:16.516 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:16.516 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.516 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:16.516 [2024-11-25 12:25:12.128684] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:16.516 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.516 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:26:16.516 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.516 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:16.516 null0 00:26:16.516 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.516 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:26:16.516 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.516 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:16.516 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.516 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:26:16.516 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.516 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:16.516 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.516 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g e7693897af3d43539478c4fa53fff578 00:26:16.516 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 
-- # xtrace_disable 00:26:16.516 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:16.516 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.516 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:26:16.516 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.516 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:16.516 [2024-11-25 12:25:12.168768] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:16.516 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.516 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:26:16.516 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.516 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:16.775 nvme0n1 00:26:16.775 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.775 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:16.775 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.775 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:16.775 [ 00:26:16.775 { 00:26:16.775 "aliases": [ 00:26:16.775 "e7693897-af3d-4353-9478-c4fa53fff578" 00:26:16.775 ], 00:26:16.775 "assigned_rate_limits": { 00:26:16.775 "r_mbytes_per_sec": 0, 00:26:16.775 "rw_ios_per_sec": 0, 00:26:16.775 "rw_mbytes_per_sec": 0, 00:26:16.775 "w_mbytes_per_sec": 0 00:26:16.775 }, 00:26:16.775 "block_size": 512, 00:26:16.775 "claimed": false, 00:26:16.775 "driver_specific": { 00:26:16.775 "mp_policy": "active_passive", 00:26:16.775 "nvme": [ 00:26:16.775 { 00:26:16.775 "ctrlr_data": { 00:26:16.775 "ana_reporting": false, 00:26:16.775 "cntlid": 1, 00:26:16.775 "firmware_revision": "25.01", 00:26:16.775 "model_number": "SPDK bdev Controller", 00:26:16.775 "multi_ctrlr": true, 00:26:16.775 "oacs": { 00:26:16.775 "firmware": 0, 00:26:16.775 "format": 0, 00:26:16.775 "ns_manage": 0, 00:26:16.775 "security": 0 00:26:16.775 }, 00:26:16.775 "serial_number": "00000000000000000000", 00:26:16.775 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:16.775 "vendor_id": "0x8086" 00:26:16.775 }, 00:26:16.775 "ns_data": { 00:26:16.775 "can_share": true, 00:26:16.775 "id": 1 00:26:16.775 }, 00:26:16.775 "trid": { 00:26:16.775 "adrfam": "IPv4", 00:26:16.775 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:16.775 "traddr": "10.0.0.3", 00:26:16.775 "trsvcid": "4420", 00:26:16.775 "trtype": "TCP" 00:26:16.775 }, 00:26:16.775 "vs": { 00:26:16.775 "nvme_version": "1.3" 00:26:16.775 } 00:26:16.775 } 00:26:16.775 ] 00:26:16.775 }, 00:26:16.775 "memory_domains": [ 00:26:16.775 { 00:26:16.775 "dma_device_id": "system", 00:26:16.775 "dma_device_type": 1 00:26:16.775 } 00:26:16.775 ], 00:26:16.775 "name": "nvme0n1", 00:26:16.775 "num_blocks": 2097152, 00:26:16.775 "numa_id": -1, 00:26:16.775 "product_name": "NVMe disk", 00:26:16.775 "supported_io_types": { 00:26:16.775 "abort": true, 
00:26:16.775 "compare": true, 00:26:16.775 "compare_and_write": true, 00:26:16.775 "copy": true, 00:26:16.775 "flush": true, 00:26:16.775 "get_zone_info": false, 00:26:16.775 "nvme_admin": true, 00:26:16.775 "nvme_io": true, 00:26:16.775 "nvme_io_md": false, 00:26:16.775 "nvme_iov_md": false, 00:26:16.775 "read": true, 00:26:16.775 "reset": true, 00:26:16.775 "seek_data": false, 00:26:16.775 "seek_hole": false, 00:26:16.775 "unmap": false, 00:26:16.775 "write": true, 00:26:16.775 "write_zeroes": true, 00:26:16.775 "zcopy": false, 00:26:16.775 "zone_append": false, 00:26:16.775 "zone_management": false 00:26:16.775 }, 00:26:16.775 "uuid": "e7693897-af3d-4353-9478-c4fa53fff578", 00:26:16.775 "zoned": false 00:26:16.775 } 00:26:16.775 ] 00:26:16.775 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.775 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:26:16.775 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.775 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:16.775 [2024-11-25 12:25:12.421118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:16.775 [2024-11-25 12:25:12.421182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2618680 (9): Bad file descriptor 00:26:16.775 [2024-11-25 12:25:12.593852] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:26:16.775 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.775 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:16.775 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.775 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:16.775 [ 00:26:16.775 { 00:26:16.775 "aliases": [ 00:26:16.775 "e7693897-af3d-4353-9478-c4fa53fff578" 00:26:16.775 ], 00:26:16.775 "assigned_rate_limits": { 00:26:16.775 "r_mbytes_per_sec": 0, 00:26:16.775 "rw_ios_per_sec": 0, 00:26:16.775 "rw_mbytes_per_sec": 0, 00:26:16.775 "w_mbytes_per_sec": 0 00:26:16.775 }, 00:26:16.775 "block_size": 512, 00:26:16.776 "claimed": false, 00:26:16.776 "driver_specific": { 00:26:16.776 "mp_policy": "active_passive", 00:26:16.776 "nvme": [ 00:26:16.776 { 00:26:16.776 "ctrlr_data": { 00:26:16.776 "ana_reporting": false, 00:26:16.776 "cntlid": 2, 00:26:16.776 "firmware_revision": "25.01", 00:26:16.776 "model_number": "SPDK bdev Controller", 00:26:16.776 "multi_ctrlr": true, 00:26:16.776 "oacs": { 00:26:16.776 "firmware": 0, 00:26:16.776 "format": 0, 00:26:16.776 "ns_manage": 0, 00:26:16.776 "security": 0 00:26:16.776 }, 00:26:16.776 "serial_number": "00000000000000000000", 00:26:16.776 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:16.776 "vendor_id": "0x8086" 00:26:16.776 }, 00:26:16.776 "ns_data": { 00:26:16.776 "can_share": true, 00:26:16.776 "id": 1 00:26:16.776 }, 00:26:16.776 "trid": { 00:26:16.776 "adrfam": "IPv4", 00:26:16.776 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:16.776 "traddr": "10.0.0.3", 00:26:16.776 "trsvcid": "4420", 00:26:16.776 "trtype": "TCP" 00:26:16.776 }, 00:26:16.776 "vs": { 00:26:16.776 "nvme_version": "1.3" 00:26:16.776 } 00:26:16.776 } 00:26:16.776 ] 
00:26:16.776 }, 00:26:16.776 "memory_domains": [ 00:26:16.776 { 00:26:16.776 "dma_device_id": "system", 00:26:16.776 "dma_device_type": 1 00:26:16.776 } 00:26:16.776 ], 00:26:16.776 "name": "nvme0n1", 00:26:16.776 "num_blocks": 2097152, 00:26:16.776 "numa_id": -1, 00:26:16.776 "product_name": "NVMe disk", 00:26:16.776 "supported_io_types": { 00:26:16.776 "abort": true, 00:26:16.776 "compare": true, 00:26:16.776 "compare_and_write": true, 00:26:16.776 "copy": true, 00:26:16.776 "flush": true, 00:26:16.776 "get_zone_info": false, 00:26:16.776 "nvme_admin": true, 00:26:16.776 "nvme_io": true, 00:26:16.776 "nvme_io_md": false, 00:26:16.776 "nvme_iov_md": false, 00:26:16.776 "read": true, 00:26:16.776 "reset": true, 00:26:16.776 "seek_data": false, 00:26:16.776 "seek_hole": false, 00:26:16.776 "unmap": false, 00:26:16.776 "write": true, 00:26:16.776 "write_zeroes": true, 00:26:16.776 "zcopy": false, 00:26:16.776 "zone_append": false, 00:26:16.776 "zone_management": false 00:26:16.776 }, 00:26:16.776 "uuid": "e7693897-af3d-4353-9478-c4fa53fff578", 00:26:16.776 "zoned": false 00:26:16.776 } 00:26:16.776 ] 00:26:16.776 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.776 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:16.776 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.776 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:16.776 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.776 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:26:16.776 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.eCHPTUfBTk 00:26:16.776 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:26:16.776 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.eCHPTUfBTk 00:26:16.776 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.eCHPTUfBTk 00:26:16.776 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.776 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:17.034 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.034 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:26:17.034 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.034 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:17.034 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.034 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 --secure-channel 00:26:17.034 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.034 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:17.034 [2024-11-25 12:25:12.665304] 
tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:17.034 [2024-11-25 12:25:12.665414] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:26:17.034 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.034 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:26:17.034 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.034 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:17.034 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.034 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:26:17.034 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.034 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:17.034 [2024-11-25 12:25:12.681319] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:17.034 nvme0n1 00:26:17.034 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.034 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:17.034 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.034 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:17.034 [ 00:26:17.034 { 00:26:17.035 "aliases": [ 00:26:17.035 "e7693897-af3d-4353-9478-c4fa53fff578" 00:26:17.035 ], 00:26:17.035 "assigned_rate_limits": { 00:26:17.035 "r_mbytes_per_sec": 0, 00:26:17.035 "rw_ios_per_sec": 0, 00:26:17.035 "rw_mbytes_per_sec": 0, 00:26:17.035 "w_mbytes_per_sec": 0 00:26:17.035 }, 00:26:17.035 "block_size": 512, 00:26:17.035 "claimed": false, 00:26:17.035 "driver_specific": { 00:26:17.035 "mp_policy": "active_passive", 00:26:17.035 "nvme": [ 00:26:17.035 { 00:26:17.035 "ctrlr_data": { 00:26:17.035 "ana_reporting": false, 00:26:17.035 "cntlid": 3, 00:26:17.035 "firmware_revision": "25.01", 00:26:17.035 "model_number": "SPDK bdev Controller", 00:26:17.035 "multi_ctrlr": true, 00:26:17.035 "oacs": { 00:26:17.035 "firmware": 0, 00:26:17.035 "format": 0, 00:26:17.035 "ns_manage": 0, 00:26:17.035 "security": 0 00:26:17.035 }, 00:26:17.035 "serial_number": "00000000000000000000", 00:26:17.035 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:17.035 "vendor_id": "0x8086" 00:26:17.035 }, 00:26:17.035 "ns_data": { 00:26:17.035 "can_share": true, 00:26:17.035 "id": 1 00:26:17.035 }, 00:26:17.035 "trid": { 00:26:17.035 "adrfam": "IPv4", 00:26:17.035 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:17.035 "traddr": "10.0.0.3", 00:26:17.035 "trsvcid": "4421", 00:26:17.035 "trtype": "TCP" 00:26:17.035 }, 00:26:17.035 "vs": { 00:26:17.035 "nvme_version": "1.3" 00:26:17.035 } 00:26:17.035 } 00:26:17.035 ] 00:26:17.035 }, 00:26:17.035 "memory_domains": [ 00:26:17.035 { 00:26:17.035 "dma_device_id": "system", 00:26:17.035 "dma_device_type": 1 00:26:17.035 } 00:26:17.035 ], 00:26:17.035 "name": "nvme0n1", 00:26:17.035 "num_blocks": 
2097152, 00:26:17.035 "numa_id": -1, 00:26:17.035 "product_name": "NVMe disk", 00:26:17.035 "supported_io_types": { 00:26:17.035 "abort": true, 00:26:17.035 "compare": true, 00:26:17.035 "compare_and_write": true, 00:26:17.035 "copy": true, 00:26:17.035 "flush": true, 00:26:17.035 "get_zone_info": false, 00:26:17.035 "nvme_admin": true, 00:26:17.035 "nvme_io": true, 00:26:17.035 "nvme_io_md": false, 00:26:17.035 "nvme_iov_md": false, 00:26:17.035 "read": true, 00:26:17.035 "reset": true, 00:26:17.035 "seek_data": false, 00:26:17.035 "seek_hole": false, 00:26:17.035 "unmap": false, 00:26:17.035 "write": true, 00:26:17.035 "write_zeroes": true, 00:26:17.035 "zcopy": false, 00:26:17.035 "zone_append": false, 00:26:17.035 "zone_management": false 00:26:17.035 }, 00:26:17.035 "uuid": "e7693897-af3d-4353-9478-c4fa53fff578", 00:26:17.035 "zoned": false 00:26:17.035 } 00:26:17.035 ] 00:26:17.035 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.035 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:17.035 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.035 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:17.035 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.035 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.eCHPTUfBTk 00:26:17.035 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:26:17.035 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:26:17.035 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:17.035 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:26:17.035 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:17.035 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:26:17.035 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:17.035 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:17.035 rmmod nvme_tcp 00:26:17.035 rmmod nvme_fabrics 00:26:17.035 rmmod nvme_keyring 00:26:17.035 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:17.035 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:26:17.035 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:26:17.035 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 85964 ']' 00:26:17.035 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 85964 00:26:17.035 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 85964 ']' 00:26:17.035 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 85964 00:26:17.035 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:26:17.035 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:17.035 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85964 00:26:17.293 12:25:12 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:17.293 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:17.293 killing process with pid 85964 00:26:17.293 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85964' 00:26:17.293 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 85964 00:26:17.293 12:25:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 85964 00:26:17.293 12:25:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:17.293 12:25:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:17.293 12:25:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:17.293 12:25:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:26:17.293 12:25:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:26:17.293 12:25:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:17.293 12:25:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:26:17.293 12:25:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:17.293 12:25:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:17.293 12:25:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:17.293 12:25:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:17.293 12:25:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:17.293 12:25:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:17.293 12:25:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:17.293 12:25:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:17.294 12:25:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:17.294 12:25:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:17.294 12:25:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:17.294 12:25:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:17.553 12:25:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:17.553 12:25:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:17.553 12:25:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:17.553 12:25:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:17.553 12:25:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:17.553 12:25:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:17.553 12:25:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:26:17.553 12:25:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@300 -- # return 0 00:26:17.553 ************************************ 00:26:17.553 END TEST nvmf_async_init 00:26:17.553 ************************************ 00:26:17.553 00:26:17.553 real 0m2.529s 00:26:17.553 user 0m2.140s 00:26:17.553 sys 0m0.560s 00:26:17.553 12:25:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:17.553 12:25:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:26:17.553 12:25:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:26:17.553 12:25:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:17.553 12:25:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:17.553 12:25:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.553 ************************************ 00:26:17.553 START TEST dma 00:26:17.553 ************************************ 00:26:17.553 12:25:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:26:17.553 * Looking for test storage... 00:26:17.553 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:17.553 12:25:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:17.553 12:25:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:26:17.553 12:25:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:17.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.813 --rc genhtml_branch_coverage=1 00:26:17.813 --rc genhtml_function_coverage=1 00:26:17.813 --rc genhtml_legend=1 00:26:17.813 --rc geninfo_all_blocks=1 00:26:17.813 --rc geninfo_unexecuted_blocks=1 00:26:17.813 00:26:17.813 ' 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:17.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.813 --rc genhtml_branch_coverage=1 00:26:17.813 --rc genhtml_function_coverage=1 00:26:17.813 --rc genhtml_legend=1 00:26:17.813 --rc geninfo_all_blocks=1 00:26:17.813 --rc geninfo_unexecuted_blocks=1 00:26:17.813 00:26:17.813 ' 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:17.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.813 --rc genhtml_branch_coverage=1 00:26:17.813 --rc genhtml_function_coverage=1 00:26:17.813 --rc genhtml_legend=1 00:26:17.813 --rc geninfo_all_blocks=1 00:26:17.813 --rc geninfo_unexecuted_blocks=1 00:26:17.813 00:26:17.813 ' 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:17.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.813 --rc genhtml_branch_coverage=1 00:26:17.813 --rc genhtml_function_coverage=1 00:26:17.813 --rc genhtml_legend=1 00:26:17.813 --rc geninfo_all_blocks=1 00:26:17.813 --rc geninfo_unexecuted_blocks=1 00:26:17.813 00:26:17.813 ' 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:17.813 12:25:13 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:17.813 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:26:17.813 00:26:17.813 real 0m0.168s 00:26:17.813 user 0m0.096s 00:26:17.813 sys 0m0.079s 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:26:17.813 ************************************ 00:26:17.813 END TEST dma 00:26:17.813 ************************************ 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:17.813 12:25:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:17.813 ************************************ 00:26:17.813 START TEST nvmf_identify 00:26:17.814 ************************************ 00:26:17.814 12:25:13 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:26:17.814 * Looking for test storage... 00:26:17.814 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:17.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.814 --rc genhtml_branch_coverage=1 00:26:17.814 --rc genhtml_function_coverage=1 00:26:17.814 --rc genhtml_legend=1 00:26:17.814 --rc geninfo_all_blocks=1 00:26:17.814 --rc geninfo_unexecuted_blocks=1 00:26:17.814 00:26:17.814 ' 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:17.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.814 --rc genhtml_branch_coverage=1 00:26:17.814 --rc genhtml_function_coverage=1 00:26:17.814 --rc genhtml_legend=1 00:26:17.814 --rc geninfo_all_blocks=1 00:26:17.814 --rc geninfo_unexecuted_blocks=1 00:26:17.814 00:26:17.814 ' 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:17.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.814 --rc genhtml_branch_coverage=1 00:26:17.814 --rc genhtml_function_coverage=1 00:26:17.814 --rc genhtml_legend=1 00:26:17.814 --rc geninfo_all_blocks=1 00:26:17.814 --rc geninfo_unexecuted_blocks=1 00:26:17.814 00:26:17.814 ' 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:17.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:17.814 --rc genhtml_branch_coverage=1 00:26:17.814 --rc genhtml_function_coverage=1 00:26:17.814 --rc genhtml_legend=1 00:26:17.814 --rc geninfo_all_blocks=1 00:26:17.814 --rc geninfo_unexecuted_blocks=1 00:26:17.814 00:26:17.814 ' 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.814 
12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:17.814 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:17.814 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:17.815 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:17.815 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:17.815 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:17.815 12:25:13 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:17.815 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:17.815 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:17.815 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:26:17.815 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:17.815 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:17.815 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:17.815 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:17.815 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:17.815 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:17.815 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:17.815 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:17.815 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:17.815 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:17.815 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:17.815 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:17.815 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:17.815 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:17.815 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:17.815 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:17.815 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:17.815 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:17.815 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:17.815 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:17.815 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:17.815 Cannot find device "nvmf_init_br" 00:26:17.815 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:26:17.815 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:18.073 Cannot find device "nvmf_init_br2" 00:26:18.073 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:26:18.073 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:18.073 Cannot find device "nvmf_tgt_br" 00:26:18.073 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:26:18.073 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
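[Note] The "Cannot find device" and "Cannot open network namespace" messages around this point are expected: nvmftestinit tears down any leftover topology before nvmf_veth_init rebuilds it, and on a fresh host there is nothing to delete yet. The nvmf_veth_init sequence that follows creates a target network namespace and joins the initiator-side and target-side veth pairs over a bridge. Condensed, and using only the names and addresses visible in this trace, the primary path is roughly:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                       # NVMF_FIRST_INITIATOR_IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if  # NVMF_FIRST_TARGET_IP
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

A second pair (nvmf_init_if2/nvmf_tgt_if2 on 10.0.0.2 and 10.0.0.4) is set up the same way, iptables ACCEPT rules are added for port 4420, and the four pings below verify both directions before the target is started.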
00:26:18.073 Cannot find device "nvmf_tgt_br2" 00:26:18.073 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:26:18.073 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:18.073 Cannot find device "nvmf_init_br" 00:26:18.073 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:26:18.073 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:18.073 Cannot find device "nvmf_init_br2" 00:26:18.073 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:26:18.074 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:18.074 Cannot find device "nvmf_tgt_br" 00:26:18.074 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:26:18.074 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:18.074 Cannot find device "nvmf_tgt_br2" 00:26:18.074 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:26:18.074 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:18.074 Cannot find device "nvmf_br" 00:26:18.074 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:26:18.074 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:18.074 Cannot find device "nvmf_init_if" 00:26:18.074 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:26:18.074 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:18.074 Cannot find device "nvmf_init_if2" 00:26:18.074 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:26:18.074 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:18.074 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:18.074 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:26:18.074 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:18.074 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:18.074 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:26:18.074 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:18.074 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:18.074 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:18.074 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:18.074 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:18.074 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:18.074 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:18.074 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:18.074 
12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:18.074 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:18.074 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:18.074 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:18.074 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:18.074 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:18.074 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:18.074 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:18.074 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:18.074 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:18.074 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:18.074 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:18.074 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:18.074 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:18.074 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:18.074 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:18.074 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:18.074 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:18.074 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:18.074 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:18.074 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:18.074 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:18.074 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:18.074 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:18.074 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:18.074 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:26:18.074 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.113 ms 00:26:18.074 00:26:18.074 --- 10.0.0.3 ping statistics --- 00:26:18.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:18.074 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:26:18.074 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:18.074 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:18.074 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.038 ms 00:26:18.074 00:26:18.074 --- 10.0.0.4 ping statistics --- 00:26:18.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:18.074 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:26:18.074 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:18.332 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:18.332 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:26:18.332 00:26:18.332 --- 10.0.0.1 ping statistics --- 00:26:18.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:18.332 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:26:18.332 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:18.332 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:18.332 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.033 ms 00:26:18.332 00:26:18.332 --- 10.0.0.2 ping statistics --- 00:26:18.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:18.332 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:26:18.332 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:18.332 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:26:18.332 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:18.332 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:18.332 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:18.332 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:18.332 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:18.332 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:18.332 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:18.332 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:26:18.332 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:18.332 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:18.332 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=86293 00:26:18.332 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:18.332 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:18.332 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 86293 00:26:18.332 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 86293 ']' 00:26:18.332 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:18.332 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:18.332 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:18.332 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:18.332 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:18.332 12:25:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:18.332 [2024-11-25 12:25:14.022708] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:26:18.332 [2024-11-25 12:25:14.022778] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:18.591 [2024-11-25 12:25:14.225985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:18.591 [2024-11-25 12:25:14.284044] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:18.591 [2024-11-25 12:25:14.284094] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:18.591 [2024-11-25 12:25:14.284106] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:18.591 [2024-11-25 12:25:14.284115] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:18.591 [2024-11-25 12:25:14.284124] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
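[Note] At this point the target has just been launched inside the namespace and waitforlisten blocks until the JSON-RPC socket comes up. Reduced to its essentials (waitforlisten is the autotest helper seen in the trace; the binary path is as logged), the launch at host/identify.sh@18-23 amounts to:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    waitforlisten $nvmfpid    # polls until /var/tmp/spdk.sock accepts RPCs

-m 0xF is the reactor core mask, which is why four reactors come up on cores 0-3 just below; -e 0xFFFF enables the tracepoint groups that app_setup_trace reports above.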
00:26:18.591 [2024-11-25 12:25:14.285614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:18.591 [2024-11-25 12:25:14.285676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:18.591 [2024-11-25 12:25:14.285841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:18.591 [2024-11-25 12:25:14.285846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:19.157 12:25:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:19.157 12:25:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:26:19.157 12:25:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:19.157 12:25:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.157 12:25:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:19.157 [2024-11-25 12:25:14.907816] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:19.157 12:25:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.158 12:25:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:26:19.158 12:25:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:19.158 12:25:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:19.158 12:25:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:19.158 12:25:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.158 12:25:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:19.158 Malloc0 00:26:19.158 12:25:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.158 12:25:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:19.158 12:25:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.158 12:25:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:19.158 12:25:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.158 12:25:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:26:19.158 12:25:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.158 12:25:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:19.158 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.158 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:19.158 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.158 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:19.158 [2024-11-25 12:25:15.007955] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:19.158 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.158 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:26:19.158 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.158 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:19.158 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.158 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:26:19.158 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.158 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:19.158 [ 00:26:19.158 { 00:26:19.158 "allow_any_host": true, 00:26:19.158 "hosts": [], 00:26:19.158 "listen_addresses": [ 00:26:19.158 { 00:26:19.158 "adrfam": "IPv4", 00:26:19.158 "traddr": "10.0.0.3", 00:26:19.158 "trsvcid": "4420", 00:26:19.158 "trtype": "TCP" 00:26:19.158 } 00:26:19.158 ], 00:26:19.158 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:19.158 "subtype": "Discovery" 00:26:19.158 }, 00:26:19.158 { 00:26:19.158 "allow_any_host": true, 00:26:19.158 "hosts": [], 00:26:19.158 "listen_addresses": [ 00:26:19.158 { 00:26:19.158 "adrfam": "IPv4", 00:26:19.158 "traddr": "10.0.0.3", 00:26:19.158 "trsvcid": "4420", 00:26:19.158 "trtype": "TCP" 00:26:19.158 } 00:26:19.158 ], 00:26:19.158 "max_cntlid": 65519, 00:26:19.158 "max_namespaces": 32, 00:26:19.158 "min_cntlid": 1, 00:26:19.158 "model_number": "SPDK bdev Controller", 00:26:19.158 "namespaces": [ 00:26:19.158 { 00:26:19.158 "bdev_name": "Malloc0", 00:26:19.158 "eui64": "ABCDEF0123456789", 00:26:19.158 "name": "Malloc0", 00:26:19.158 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:26:19.158 "nsid": 1, 00:26:19.158 "uuid": "5d43fb33-1199-4e75-9451-daac97fcc798" 00:26:19.158 } 00:26:19.158 ], 00:26:19.158 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:19.158 "serial_number": "SPDK00000000000001", 00:26:19.158 "subtype": "NVMe" 00:26:19.158 } 00:26:19.158 ] 00:26:19.158 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.158 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:26:19.418 [2024-11-25 12:25:15.052305] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
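[Note] The rpc_cmd calls traced above have provisioned the target over the RPC socket: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, a subsystem carrying that namespace, and listeners for both the subsystem and the discovery service. rpc_cmd is a wrapper around the repo's scripts/rpc.py pointed at the target's socket, so the same provisioning maps onto plain invocations along these lines (illustrative sketch, path per the standard SPDK layout):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

The nvmf_get_subsystems dump above confirms the result: the discovery subsystem plus nqn.2016-06.io.spdk:cnode1 exposing Malloc0 as nsid 1, both listening on 10.0.0.3:4420. spdk_nvme_identify, launched just above, now queries the discovery service.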
00:26:19.418 [2024-11-25 12:25:15.052431] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86346 ] 00:26:19.418 [2024-11-25 12:25:15.211249] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:26:19.418 [2024-11-25 12:25:15.211301] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:26:19.418 [2024-11-25 12:25:15.211305] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:26:19.418 [2024-11-25 12:25:15.211317] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:26:19.418 [2024-11-25 12:25:15.211326] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:26:19.418 [2024-11-25 12:25:15.211565] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:26:19.418 [2024-11-25 12:25:15.211601] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1663d90 0 00:26:19.418 [2024-11-25 12:25:15.217764] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:26:19.418 [2024-11-25 12:25:15.217778] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:26:19.418 [2024-11-25 12:25:15.217781] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:26:19.418 [2024-11-25 12:25:15.217784] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:26:19.418 [2024-11-25 12:25:15.217806] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:19.418 [2024-11-25 12:25:15.217811] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:19.418 [2024-11-25 12:25:15.217814] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1663d90) 00:26:19.418 [2024-11-25 12:25:15.217826] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:26:19.418 [2024-11-25 12:25:15.217847] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a4600, cid 0, qid 0 00:26:19.418 [2024-11-25 12:25:15.225763] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:19.418 [2024-11-25 12:25:15.225777] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:19.418 [2024-11-25 12:25:15.225780] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:19.418 [2024-11-25 12:25:15.225784] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a4600) on tqpair=0x1663d90 00:26:19.418 [2024-11-25 12:25:15.225793] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:26:19.418 [2024-11-25 12:25:15.225799] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:26:19.418 [2024-11-25 12:25:15.225802] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:26:19.418 [2024-11-25 12:25:15.225813] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:19.418 [2024-11-25 12:25:15.225815] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
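
The DEBUG trace that follows is the host-side admin-queue bring-up performed by spdk_nvme_identify: the ICReq/ICResp exchange on the freshly connected socket, a FABRIC CONNECT that returns CNTLID 0x0001, then repeated FABRIC PROPERTY GET/SET commands that read VS and CAP, write CC.EN = 1, and poll until CSTS.RDY = 1, after which IDENTIFY is issued. The same discovery can also be driven from a stock Linux initiator with nvme-cli instead of the SPDK tool; a rough equivalent, assuming the kernel nvme-tcp module is loaded on the host:

    # query the discovery service this test just started listening on
    nvme discover -t tcp -a 10.0.0.3 -s 4420
    # optionally attach to the NVM subsystem it advertises
    nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1
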
00:26:19.418 [2024-11-25 12:25:15.225818] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1663d90) 00:26:19.418 [2024-11-25 12:25:15.225824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.418 [2024-11-25 12:25:15.225842] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a4600, cid 0, qid 0 00:26:19.418 [2024-11-25 12:25:15.225894] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:19.418 [2024-11-25 12:25:15.225899] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:19.418 [2024-11-25 12:25:15.225902] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:19.418 [2024-11-25 12:25:15.225905] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a4600) on tqpair=0x1663d90 00:26:19.418 [2024-11-25 12:25:15.225908] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:26:19.418 [2024-11-25 12:25:15.225913] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:26:19.418 [2024-11-25 12:25:15.225919] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:19.418 [2024-11-25 12:25:15.225921] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:19.418 [2024-11-25 12:25:15.225924] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1663d90) 00:26:19.418 [2024-11-25 12:25:15.225929] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.418 [2024-11-25 12:25:15.225940] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a4600, cid 0, qid 0 00:26:19.418 [2024-11-25 12:25:15.225982] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:19.418 [2024-11-25 12:25:15.225987] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:19.418 [2024-11-25 12:25:15.225990] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:19.418 [2024-11-25 12:25:15.225992] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a4600) on tqpair=0x1663d90 00:26:19.418 [2024-11-25 12:25:15.225996] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:26:19.418 [2024-11-25 12:25:15.226002] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:26:19.418 [2024-11-25 12:25:15.226007] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:19.418 [2024-11-25 12:25:15.226009] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:19.418 [2024-11-25 12:25:15.226011] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1663d90) 00:26:19.418 [2024-11-25 12:25:15.226017] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.419 [2024-11-25 12:25:15.226027] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a4600, cid 0, qid 0 00:26:19.419 [2024-11-25 12:25:15.226072] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:19.419 [2024-11-25 12:25:15.226077] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:19.419 [2024-11-25 12:25:15.226079] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:19.419 [2024-11-25 12:25:15.226082] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a4600) on tqpair=0x1663d90 00:26:19.419 [2024-11-25 12:25:15.226086] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:26:19.419 [2024-11-25 12:25:15.226092] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:19.419 [2024-11-25 12:25:15.226095] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:19.419 [2024-11-25 12:25:15.226097] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1663d90) 00:26:19.419 [2024-11-25 12:25:15.226103] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.419 [2024-11-25 12:25:15.226113] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a4600, cid 0, qid 0 00:26:19.419 [2024-11-25 12:25:15.226160] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:19.419 [2024-11-25 12:25:15.226165] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:19.419 [2024-11-25 12:25:15.226167] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:19.419 [2024-11-25 12:25:15.226169] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a4600) on tqpair=0x1663d90 00:26:19.419 [2024-11-25 12:25:15.226173] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:26:19.419 [2024-11-25 12:25:15.226176] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:26:19.419 [2024-11-25 12:25:15.226182] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:26:19.419 [2024-11-25 12:25:15.226285] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:26:19.419 [2024-11-25 12:25:15.226289] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:26:19.419 [2024-11-25 12:25:15.226296] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:19.419 [2024-11-25 12:25:15.226298] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:19.419 [2024-11-25 12:25:15.226301] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1663d90) 00:26:19.419 [2024-11-25 12:25:15.226306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.419 [2024-11-25 12:25:15.226317] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a4600, cid 0, qid 0 00:26:19.419 [2024-11-25 12:25:15.226365] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:19.419 [2024-11-25 12:25:15.226370] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:19.419 [2024-11-25 12:25:15.226372] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:26:19.419 [2024-11-25 12:25:15.226375] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a4600) on tqpair=0x1663d90 00:26:19.419 [2024-11-25 12:25:15.226379] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:26:19.419 [2024-11-25 12:25:15.226385] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:19.419 [2024-11-25 12:25:15.226388] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:19.419 [2024-11-25 12:25:15.226390] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1663d90) 00:26:19.419 [2024-11-25 12:25:15.226396] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.419 [2024-11-25 12:25:15.226406] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a4600, cid 0, qid 0 00:26:19.419 [2024-11-25 12:25:15.226452] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:19.419 [2024-11-25 12:25:15.226457] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:19.419 [2024-11-25 12:25:15.226459] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:19.419 [2024-11-25 12:25:15.226462] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a4600) on tqpair=0x1663d90 00:26:19.419 [2024-11-25 12:25:15.226465] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:26:19.419 [2024-11-25 12:25:15.226469] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:26:19.419 [2024-11-25 12:25:15.226474] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:26:19.419 [2024-11-25 12:25:15.226481] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:26:19.419 [2024-11-25 12:25:15.226488] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:19.419 [2024-11-25 12:25:15.226491] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1663d90) 00:26:19.419 [2024-11-25 12:25:15.226496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.419 [2024-11-25 12:25:15.226507] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a4600, cid 0, qid 0 00:26:19.419 [2024-11-25 12:25:15.226571] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:19.419 [2024-11-25 12:25:15.226576] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:19.419 [2024-11-25 12:25:15.226579] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:19.419 [2024-11-25 12:25:15.226581] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1663d90): datao=0, datal=4096, cccid=0 00:26:19.419 [2024-11-25 12:25:15.226585] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16a4600) on tqpair(0x1663d90): expected_datao=0, payload_size=4096 00:26:19.419 [2024-11-25 12:25:15.226588] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:26:19.419 [2024-11-25 12:25:15.226594] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:19.419 [2024-11-25 12:25:15.226597] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:19.419 [2024-11-25 12:25:15.226603] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:19.419 [2024-11-25 12:25:15.226608] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:19.419 [2024-11-25 12:25:15.226610] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:19.419 [2024-11-25 12:25:15.226613] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a4600) on tqpair=0x1663d90 00:26:19.419 [2024-11-25 12:25:15.226618] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:26:19.419 [2024-11-25 12:25:15.226622] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:26:19.419 [2024-11-25 12:25:15.226624] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:26:19.419 [2024-11-25 12:25:15.226628] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:26:19.419 [2024-11-25 12:25:15.226631] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:26:19.419 [2024-11-25 12:25:15.226634] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:26:19.419 [2024-11-25 12:25:15.226643] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:26:19.419 [2024-11-25 12:25:15.226647] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:19.419 [2024-11-25 12:25:15.226650] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:19.419 [2024-11-25 12:25:15.226653] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1663d90) 00:26:19.419 [2024-11-25 12:25:15.226658] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:19.419 [2024-11-25 12:25:15.226670] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a4600, cid 0, qid 0 00:26:19.419 [2024-11-25 12:25:15.226714] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:19.419 [2024-11-25 12:25:15.226719] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:19.419 [2024-11-25 12:25:15.226721] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:19.419 [2024-11-25 12:25:15.226724] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a4600) on tqpair=0x1663d90 00:26:19.419 [2024-11-25 12:25:15.226730] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:19.419 [2024-11-25 12:25:15.226733] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:19.419 [2024-11-25 12:25:15.226735] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1663d90) 00:26:19.419 [2024-11-25 12:25:15.226740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:19.419 
[2024-11-25 12:25:15.226744] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:19.419 [2024-11-25 12:25:15.226747] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:19.419 [2024-11-25 12:25:15.226759] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1663d90) 00:26:19.419 [2024-11-25 12:25:15.226764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:19.420 [2024-11-25 12:25:15.226769] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:19.420 [2024-11-25 12:25:15.226771] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:19.420 [2024-11-25 12:25:15.226774] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1663d90) 00:26:19.420 [2024-11-25 12:25:15.226778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:19.420 [2024-11-25 12:25:15.226783] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:19.420 [2024-11-25 12:25:15.226785] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:19.420 [2024-11-25 12:25:15.226787] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1663d90) 00:26:19.420 [2024-11-25 12:25:15.226792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:19.420 [2024-11-25 12:25:15.226795] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:26:19.420 [2024-11-25 12:25:15.226803] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:26:19.420 [2024-11-25 12:25:15.226809] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:19.420 [2024-11-25 12:25:15.226812] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1663d90) 00:26:19.420 [2024-11-25 12:25:15.226817] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.420 [2024-11-25 12:25:15.226830] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a4600, cid 0, qid 0 00:26:19.420 [2024-11-25 12:25:15.226834] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a4780, cid 1, qid 0 00:26:19.420 [2024-11-25 12:25:15.226838] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a4900, cid 2, qid 0 00:26:19.420 [2024-11-25 12:25:15.226841] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a4a80, cid 3, qid 0 00:26:19.420 [2024-11-25 12:25:15.226845] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a4c00, cid 4, qid 0 00:26:19.420 [2024-11-25 12:25:15.226932] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:19.420 [2024-11-25 12:25:15.226936] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:19.420 [2024-11-25 12:25:15.226939] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:19.420 [2024-11-25 12:25:15.226941] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a4c00) on tqpair=0x1663d90 00:26:19.420 [2024-11-25 
12:25:15.226946] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:26:19.420 [2024-11-25 12:25:15.226949] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:26:19.420 [2024-11-25 12:25:15.226956] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:19.420 [2024-11-25 12:25:15.226959] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1663d90) 00:26:19.420 [2024-11-25 12:25:15.226964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.420 [2024-11-25 12:25:15.226975] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a4c00, cid 4, qid 0 00:26:19.420 [2024-11-25 12:25:15.227023] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:19.420 [2024-11-25 12:25:15.227028] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:19.420 [2024-11-25 12:25:15.227030] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:19.420 [2024-11-25 12:25:15.227033] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1663d90): datao=0, datal=4096, cccid=4 00:26:19.420 [2024-11-25 12:25:15.227036] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16a4c00) on tqpair(0x1663d90): expected_datao=0, payload_size=4096 00:26:19.420 [2024-11-25 12:25:15.227038] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:19.420 [2024-11-25 12:25:15.227044] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:19.420 [2024-11-25 12:25:15.227046] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:19.420 [2024-11-25 12:25:15.227052] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:19.420 [2024-11-25 12:25:15.227057] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:19.420 [2024-11-25 12:25:15.227059] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:19.420 [2024-11-25 12:25:15.227062] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a4c00) on tqpair=0x1663d90 00:26:19.420 [2024-11-25 12:25:15.227071] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:26:19.420 [2024-11-25 12:25:15.227090] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:19.420 [2024-11-25 12:25:15.227093] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1663d90) 00:26:19.420 [2024-11-25 12:25:15.227098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.420 [2024-11-25 12:25:15.227103] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:19.420 [2024-11-25 12:25:15.227106] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:19.420 [2024-11-25 12:25:15.227108] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1663d90) 00:26:19.420 [2024-11-25 12:25:15.227113] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:26:19.420 [2024-11-25 12:25:15.227130] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a4c00, cid 4, qid 0 00:26:19.420 [2024-11-25 12:25:15.227134] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a4d80, cid 5, qid 0 00:26:19.420 [2024-11-25 12:25:15.227222] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:19.420 [2024-11-25 12:25:15.227233] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:19.420 [2024-11-25 12:25:15.227236] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:19.420 [2024-11-25 12:25:15.227238] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1663d90): datao=0, datal=1024, cccid=4 00:26:19.420 [2024-11-25 12:25:15.227241] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16a4c00) on tqpair(0x1663d90): expected_datao=0, payload_size=1024 00:26:19.420 [2024-11-25 12:25:15.227244] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:19.420 [2024-11-25 12:25:15.227249] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:19.420 [2024-11-25 12:25:15.227252] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:19.420 [2024-11-25 12:25:15.227256] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:19.420 [2024-11-25 12:25:15.227261] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:19.420 [2024-11-25 12:25:15.227263] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:19.420 [2024-11-25 12:25:15.227266] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a4d80) on tqpair=0x1663d90 00:26:19.420 [2024-11-25 12:25:15.268788] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:19.420 [2024-11-25 12:25:15.268803] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:19.420 [2024-11-25 12:25:15.268805] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:19.420 [2024-11-25 12:25:15.268808] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a4c00) on tqpair=0x1663d90 00:26:19.420 [2024-11-25 12:25:15.268819] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:19.420 [2024-11-25 12:25:15.268821] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1663d90) 00:26:19.420 [2024-11-25 12:25:15.268827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.420 [2024-11-25 12:25:15.268846] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a4c00, cid 4, qid 0 00:26:19.420 [2024-11-25 12:25:15.268898] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:19.420 [2024-11-25 12:25:15.268902] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:19.420 [2024-11-25 12:25:15.268905] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:19.420 [2024-11-25 12:25:15.268907] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1663d90): datao=0, datal=3072, cccid=4 00:26:19.420 [2024-11-25 12:25:15.268910] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16a4c00) on tqpair(0x1663d90): expected_datao=0, payload_size=3072 00:26:19.420 [2024-11-25 12:25:15.268914] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:19.420 [2024-11-25 12:25:15.268919] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
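
The GET LOG PAGE commands in this stretch walk the discovery log, log identifier 0x70: cdw10 carries the zero-based dword count (NUMDL) in its upper 16 bits and the page ID in its low byte, so cdw10 00ff0070 is a 1024-byte read (0x100 dwords) of the header, 02ff0070 a 3072-byte read covering the header plus both 1024-byte records, and the closing 00010070 an 8-byte re-read of the generation counter to confirm the log did not change mid-fetch. With nvme-cli the raw page can be pulled the same way; a sketch, assuming a discovery controller already connected as /dev/nvme0:

    # 1 KiB of the discovery log page (LID 0x70); the first 8 bytes are the
    # generation counter, followed by the record count and record format
    nvme get-log /dev/nvme0 --log-id=0x70 --log-len=1024
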
00:26:19.420 [2024-11-25 12:25:15.268921] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:19.420 [2024-11-25 12:25:15.268928] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:19.420 [2024-11-25 12:25:15.268932] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:19.420 [2024-11-25 12:25:15.268935] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:19.420 [2024-11-25 12:25:15.268937] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a4c00) on tqpair=0x1663d90 00:26:19.420 [2024-11-25 12:25:15.268944] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:19.420 [2024-11-25 12:25:15.268946] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1663d90) 00:26:19.420 [2024-11-25 12:25:15.268952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.420 [2024-11-25 12:25:15.268966] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a4c00, cid 4, qid 0 00:26:19.420 [2024-11-25 12:25:15.269018] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:19.420 [2024-11-25 12:25:15.269023] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:19.420 [2024-11-25 12:25:15.269025] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:19.420 [2024-11-25 12:25:15.269028] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1663d90): datao=0, datal=8, cccid=4 00:26:19.420 [2024-11-25 12:25:15.269031] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x16a4c00) on tqpair(0x1663d90): expected_datao=0, payload_size=8 00:26:19.420 [2024-11-25 12:25:15.269033] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:19.420 [2024-11-25 12:25:15.269038] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:19.420 [2024-11-25 12:25:15.269041] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:19.688 ===================================================== 00:26:19.688 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:19.688 ===================================================== 00:26:19.688 Controller Capabilities/Features 00:26:19.688 ================================ 00:26:19.688 Vendor ID: 0000 00:26:19.688 Subsystem Vendor ID: 0000 00:26:19.688 Serial Number: .................... 00:26:19.688 Model Number: ........................................ 
00:26:19.688 Firmware Version: 25.01 00:26:19.688 Recommended Arb Burst: 0 00:26:19.688 IEEE OUI Identifier: 00 00 00 00:26:19.688 Multi-path I/O 00:26:19.688 May have multiple subsystem ports: No 00:26:19.688 May have multiple controllers: No 00:26:19.688 Associated with SR-IOV VF: No 00:26:19.688 Max Data Transfer Size: 131072 00:26:19.688 Max Number of Namespaces: 0 00:26:19.688 Max Number of I/O Queues: 1024 00:26:19.688 NVMe Specification Version (VS): 1.3 00:26:19.688 NVMe Specification Version (Identify): 1.3 00:26:19.688 Maximum Queue Entries: 128 00:26:19.688 Contiguous Queues Required: Yes 00:26:19.688 Arbitration Mechanisms Supported 00:26:19.688 Weighted Round Robin: Not Supported 00:26:19.688 Vendor Specific: Not Supported 00:26:19.688 Reset Timeout: 15000 ms 00:26:19.688 Doorbell Stride: 4 bytes 00:26:19.688 NVM Subsystem Reset: Not Supported 00:26:19.688 Command Sets Supported 00:26:19.688 NVM Command Set: Supported 00:26:19.688 Boot Partition: Not Supported 00:26:19.688 Memory Page Size Minimum: 4096 bytes 00:26:19.688 Memory Page Size Maximum: 4096 bytes 00:26:19.688 Persistent Memory Region: Not Supported 00:26:19.688 Optional Asynchronous Events Supported 00:26:19.688 Namespace Attribute Notices: Not Supported 00:26:19.688 Firmware Activation Notices: Not Supported 00:26:19.688 ANA Change Notices: Not Supported 00:26:19.688 PLE Aggregate Log Change Notices: Not Supported 00:26:19.688 LBA Status Info Alert Notices: Not Supported 00:26:19.688 EGE Aggregate Log Change Notices: Not Supported 00:26:19.688 Normal NVM Subsystem Shutdown event: Not Supported 00:26:19.688 Zone Descriptor Change Notices: Not Supported 00:26:19.688 Discovery Log Change Notices: Supported 00:26:19.688 Controller Attributes 00:26:19.688 128-bit Host Identifier: Not Supported 00:26:19.688 Non-Operational Permissive Mode: Not Supported 00:26:19.688 NVM Sets: Not Supported 00:26:19.688 Read Recovery Levels: Not Supported 00:26:19.688 Endurance Groups: Not Supported 00:26:19.688 Predictable Latency Mode: Not Supported 00:26:19.688 Traffic Based Keep ALive: Not Supported 00:26:19.688 Namespace Granularity: Not Supported 00:26:19.688 SQ Associations: Not Supported 00:26:19.688 UUID List: Not Supported 00:26:19.688 Multi-Domain Subsystem: Not Supported 00:26:19.688 Fixed Capacity Management: Not Supported 00:26:19.688 Variable Capacity Management: Not Supported 00:26:19.688 Delete Endurance Group: Not Supported 00:26:19.688 Delete NVM Set: Not Supported 00:26:19.688 Extended LBA Formats Supported: Not Supported 00:26:19.688 Flexible Data Placement Supported: Not Supported 00:26:19.688 00:26:19.688 Controller Memory Buffer Support 00:26:19.688 ================================ 00:26:19.688 Supported: No 00:26:19.688 00:26:19.688 Persistent Memory Region Support 00:26:19.688 ================================ 00:26:19.688 Supported: No 00:26:19.688 00:26:19.688 Admin Command Set Attributes 00:26:19.688 ============================ 00:26:19.688 Security Send/Receive: Not Supported 00:26:19.688 Format NVM: Not Supported 00:26:19.688 Firmware Activate/Download: Not Supported 00:26:19.688 Namespace Management: Not Supported 00:26:19.688 Device Self-Test: Not Supported 00:26:19.688 Directives: Not Supported 00:26:19.688 NVMe-MI: Not Supported 00:26:19.688 Virtualization Management: Not Supported 00:26:19.688 Doorbell Buffer Config: Not Supported 00:26:19.688 Get LBA Status Capability: Not Supported 00:26:19.688 Command & Feature Lockdown Capability: Not Supported 00:26:19.688 Abort Command Limit: 1 00:26:19.688 Async 
Event Request Limit: 4 00:26:19.688 Number of Firmware Slots: N/A 00:26:19.688 Firmware Slot 1 Read-Only: N/A 00:26:19.688 Firmware Activation Without Reset: N/A 00:26:19.688 Multiple Update Detection Support: N/A 00:26:19.688 Firmware Update Granularity: No Information Provided 00:26:19.688 Per-Namespace SMART Log: No 00:26:19.688 Asymmetric Namespace Access Log Page: Not Supported 00:26:19.688 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:19.688 Command Effects Log Page: Not Supported 00:26:19.688 Get Log Page Extended Data: Supported 00:26:19.688 Telemetry Log Pages: Not Supported 00:26:19.688 Persistent Event Log Pages: Not Supported 00:26:19.688 Supported Log Pages Log Page: May Support 00:26:19.688 Commands Supported & Effects Log Page: Not Supported 00:26:19.688 Feature Identifiers & Effects Log Page:May Support 00:26:19.688 NVMe-MI Commands & Effects Log Page: May Support 00:26:19.688 Data Area 4 for Telemetry Log: Not Supported 00:26:19.688 Error Log Page Entries Supported: 128 00:26:19.688 Keep Alive: Not Supported 00:26:19.688 00:26:19.688 NVM Command Set Attributes 00:26:19.688 ========================== 00:26:19.688 Submission Queue Entry Size 00:26:19.688 Max: 1 00:26:19.688 Min: 1 00:26:19.688 Completion Queue Entry Size 00:26:19.688 Max: 1 00:26:19.688 Min: 1 00:26:19.688 Number of Namespaces: 0 00:26:19.688 Compare Command: Not Supported 00:26:19.688 Write Uncorrectable Command: Not Supported 00:26:19.688 Dataset Management Command: Not Supported 00:26:19.688 Write Zeroes Command: Not Supported 00:26:19.688 Set Features Save Field: Not Supported 00:26:19.688 Reservations: Not Supported 00:26:19.688 Timestamp: Not Supported 00:26:19.688 Copy: Not Supported 00:26:19.688 Volatile Write Cache: Not Present 00:26:19.688 Atomic Write Unit (Normal): 1 00:26:19.688 Atomic Write Unit (PFail): 1 00:26:19.688 Atomic Compare & Write Unit: 1 00:26:19.689 Fused Compare & Write: Supported 00:26:19.689 Scatter-Gather List 00:26:19.689 SGL Command Set: Supported 00:26:19.689 SGL Keyed: Supported 00:26:19.689 SGL Bit Bucket Descriptor: Not Supported 00:26:19.689 SGL Metadata Pointer: Not Supported 00:26:19.689 Oversized SGL: Not Supported 00:26:19.689 SGL Metadata Address: Not Supported 00:26:19.689 SGL Offset: Supported 00:26:19.689 Transport SGL Data Block: Not Supported 00:26:19.689 Replay Protected Memory Block: Not Supported 00:26:19.689 00:26:19.689 Firmware Slot Information 00:26:19.689 ========================= 00:26:19.689 Active slot: 0 00:26:19.689 00:26:19.689 00:26:19.689 Error Log 00:26:19.689 ========= 00:26:19.689 00:26:19.689 Active Namespaces 00:26:19.689 ================= 00:26:19.689 Discovery Log Page 00:26:19.689 ================== 00:26:19.689 Generation Counter: 2 00:26:19.689 Number of Records: 2 00:26:19.689 Record Format: 0 00:26:19.689 00:26:19.689 Discovery Log Entry 0 00:26:19.689 ---------------------- 00:26:19.689 Transport Type: 3 (TCP) 00:26:19.689 Address Family: 1 (IPv4) 00:26:19.689 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:19.689 Entry Flags: 00:26:19.689 Duplicate Returned Information: 1 00:26:19.689 Explicit Persistent Connection Support for Discovery: 1 00:26:19.689 Transport Requirements: 00:26:19.689 Secure Channel: Not Required 00:26:19.689 Port ID: 0 (0x0000) 00:26:19.689 Controller ID: 65535 (0xffff) 00:26:19.689 Admin Max SQ Size: 128 00:26:19.689 Transport Service Identifier: 4420 00:26:19.689 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:19.689 Transport Address: 10.0.0.3 00:26:19.689 
Discovery Log Entry 1 00:26:19.689 ---------------------- 00:26:19.689 Transport Type: 3 (TCP) 00:26:19.689 Address Family: 1 (IPv4) 00:26:19.689 Subsystem Type: 2 (NVM Subsystem) 00:26:19.689 Entry Flags: 00:26:19.689 Duplicate Returned Information: 0 00:26:19.689 Explicit Persistent Connection Support for Discovery: 0 00:26:19.689 Transport Requirements: 00:26:19.689 Secure Channel: Not Required 00:26:19.689 Port ID: 0 (0x0000) 00:26:19.689 Controller ID: 65535 (0xffff) 00:26:19.689 Admin Max SQ Size: 128 00:26:19.689 Transport Service Identifier: 4420 00:26:19.689 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:26:19.689 Transport Address: 10.0.0.3 [2024-11-25 12:25:15.313769] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:19.689 [2024-11-25 12:25:15.313785] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:19.689 [2024-11-25 12:25:15.313788] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:19.689 [2024-11-25 12:25:15.313791] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a4c00) on tqpair=0x1663d90 00:26:19.689 [2024-11-25 12:25:15.313874] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:26:19.689 [2024-11-25 12:25:15.313882] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a4600) on tqpair=0x1663d90 00:26:19.689 [2024-11-25 12:25:15.313888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.689 [2024-11-25 12:25:15.313892] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a4780) on tqpair=0x1663d90 00:26:19.689 [2024-11-25 12:25:15.313895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.689 [2024-11-25 12:25:15.313899] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a4900) on tqpair=0x1663d90 00:26:19.689 [2024-11-25 12:25:15.313902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.689 [2024-11-25 12:25:15.313906] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a4a80) on tqpair=0x1663d90 00:26:19.689 [2024-11-25 12:25:15.313909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.689 [2024-11-25 12:25:15.313916] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:19.689 [2024-11-25 12:25:15.313919] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:19.689 [2024-11-25 12:25:15.313922] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1663d90) 00:26:19.689 [2024-11-25 12:25:15.313928] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.689 [2024-11-25 12:25:15.313944] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a4a80, cid 3, qid 0 00:26:19.689 [2024-11-25 12:25:15.313988] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:19.689 [2024-11-25 12:25:15.313994] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:19.689 [2024-11-25 12:25:15.313996] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:19.689 [2024-11-25 12:25:15.313999] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a4a80) on tqpair=0x1663d90 00:26:19.689 [2024-11-25 12:25:15.314004] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:19.689 [2024-11-25 12:25:15.314007] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:19.689 [2024-11-25 12:25:15.314009] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1663d90) 00:26:19.689 [2024-11-25 12:25:15.314015] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.689 [2024-11-25 12:25:15.314028] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a4a80, cid 3, qid 0 00:26:19.689 [2024-11-25 12:25:15.314079] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:19.689 [2024-11-25 12:25:15.314084] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:19.689 [2024-11-25 12:25:15.314086] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:19.689 [2024-11-25 12:25:15.314089] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a4a80) on tqpair=0x1663d90 00:26:19.689 [2024-11-25 12:25:15.314092] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:26:19.689 [2024-11-25 12:25:15.314095] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:26:19.689 [2024-11-25 12:25:15.314105] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:19.689 [2024-11-25 12:25:15.314107] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:19.689 [2024-11-25 12:25:15.314110] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1663d90) 00:26:19.689 [2024-11-25 12:25:15.314115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.689 [2024-11-25 12:25:15.314126] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a4a80, cid 3, qid 0 00:26:19.689 [2024-11-25 12:25:15.314177] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:19.689 [2024-11-25 12:25:15.314182] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:19.689 [2024-11-25 12:25:15.314184] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:19.689 [2024-11-25 12:25:15.314187] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a4a80) on tqpair=0x1663d90 00:26:19.689 [2024-11-25 12:25:15.314194] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:19.689 [2024-11-25 12:25:15.314197] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:19.689 [2024-11-25 12:25:15.314199] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1663d90) 00:26:19.689 [2024-11-25 12:25:15.314205] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.689 [2024-11-25 12:25:15.314215] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a4a80, cid 3, qid 0 00:26:19.689 [2024-11-25 12:25:15.314263] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:19.689 [2024-11-25 12:25:15.314267] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:19.689 [2024-11-25 
12:25:15.314270] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:19.690 [2024-11-25 12:25:15.314272] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a4a80) on tqpair=0x1663d90 00:26:19.690 [2024-11-25 12:25:15.314289] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:19.690 [2024-11-25 12:25:15.314292] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:19.690 [2024-11-25 12:25:15.314295] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1663d90) 00:26:19.690 [2024-11-25 12:25:15.314300] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.690 [2024-11-25 12:25:15.314310] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a4a80, cid 3, qid 0 00:26:19.690 [2024-11-25 12:25:15.314351] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:19.690 [2024-11-25 12:25:15.314356] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:19.690 [2024-11-25 12:25:15.314358] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:19.690 [2024-11-25 12:25:15.314361] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a4a80) on tqpair=0x1663d90 00:26:19.690 [2024-11-25 12:25:15.314368] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:19.690 [2024-11-25 12:25:15.314371] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:19.690 [2024-11-25 12:25:15.314373] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1663d90) 00:26:19.690 [2024-11-25 12:25:15.314379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.690 [2024-11-25 12:25:15.314389] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a4a80, cid 3, qid 0 00:26:19.690 [2024-11-25 12:25:15.314434] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:19.690 [2024-11-25 12:25:15.314439] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:19.690 [2024-11-25 12:25:15.314442] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:19.690 [2024-11-25 12:25:15.314444] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a4a80) on tqpair=0x1663d90 00:26:19.690 [2024-11-25 12:25:15.314452] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:19.690 [2024-11-25 12:25:15.314454] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:19.690 [2024-11-25 12:25:15.314457] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1663d90) 00:26:19.690 [2024-11-25 12:25:15.314462] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.690 [2024-11-25 12:25:15.314472] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a4a80, cid 3, qid 0 00:26:19.690 [2024-11-25 12:25:15.314516] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:19.690 [2024-11-25 12:25:15.314520] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:19.690 [2024-11-25 12:25:15.314523] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:19.690 [2024-11-25 12:25:15.314525] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a4a80) on 
tqpair=0x1663d90 00:26:19.690 [2024-11-25 12:25:15.314533] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:19.690 [2024-11-25 12:25:15.314535] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:19.690 [2024-11-25 12:25:15.314538] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1663d90) 00:26:19.690 [2024-11-25 12:25:15.314543] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.690 [2024-11-25 12:25:15.314553] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a4a80, cid 3, qid 0 00:26:19.690 [2024-11-25 12:25:15.314594] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:19.690 [2024-11-25 12:25:15.314598] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:19.690 [2024-11-25 12:25:15.314601] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:19.690 [2024-11-25 12:25:15.314604] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a4a80) on tqpair=0x1663d90 00:26:19.690 [2024-11-25 12:25:15.314611] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:19.690 [2024-11-25 12:25:15.314614] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:19.690 [2024-11-25 12:25:15.314616] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1663d90) 00:26:19.690 [2024-11-25 12:25:15.314622] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.690 [2024-11-25 12:25:15.314632] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a4a80, cid 3, qid 0 00:26:19.690 [2024-11-25 12:25:15.314675] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:19.690 [2024-11-25 12:25:15.314680] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:19.690 [2024-11-25 12:25:15.314682] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:19.690 [2024-11-25 12:25:15.314685] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a4a80) on tqpair=0x1663d90 00:26:19.690 [2024-11-25 12:25:15.314692] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:19.690 [2024-11-25 12:25:15.314695] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:19.690 [2024-11-25 12:25:15.314697] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1663d90) 00:26:19.690 [2024-11-25 12:25:15.314703] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.690 [2024-11-25 12:25:15.314713] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a4a80, cid 3, qid 0 00:26:19.690 [2024-11-25 12:25:15.314764] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:19.690 [2024-11-25 12:25:15.314769] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:19.690 [2024-11-25 12:25:15.314771] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:19.690 [2024-11-25 12:25:15.314774] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a4a80) on tqpair=0x1663d90 00:26:19.690 [2024-11-25 12:25:15.314781] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:19.690 [2024-11-25 12:25:15.314784] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:26:19.690 [2024-11-25 12:25:15.314787] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1663d90)
00:26:19.690 [2024-11-25 12:25:15.314792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:19.690 [2024-11-25 12:25:15.314803] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x16a4a80, cid 3, qid 0
00:26:19.690 [2024-11-25 12:25:15.314840] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:19.690 [2024-11-25 12:25:15.314845] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:26:19.690 [2024-11-25 12:25:15.314847] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:26:19.690 [2024-11-25 12:25:15.314850] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x16a4a80) on tqpair=0x1663d90
00:26:19.690 [2024-11-25 12:25:15.314857] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
[... this identical FABRIC PROPERTY GET / response DEBUG cycle for cid 3 on tqpair(0x1663d90) repeats through 12:25:15.321854 while the host polls the discovery controller's shutdown status; from here the log is condensed to one entry per line, with the Jenkins elapsed-time prefixes and the repetitive per-command nvme_tcp PDU DEBUG entries omitted ...]
[2024-11-25 12:25:15.321859] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds
12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
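For readers tracing this output against the SPDK host API: the transport string passed via -r above is exactly what spdk_nvme_transport_id_parse() consumes. Below is a minimal sketch (illustrative only, not part of the test suite) of a standalone client that connects to the same subsystem and reads the controller data; the file name, app name, and error handling are assumptions, and only public API calls from spdk/env.h and spdk/nvme.h are used.

/* identify_sketch.c — illustrative only, not part of the test run. */
#include <stdio.h>
#include <string.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid;
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";            /* hypothetical app name */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same transport ID string the test passed with -r above. */
	memset(&trid, 0, sizeof(trid));
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* Drives the admin-queue state machine this log traces:
	 * icreq/icresp, FABRIC CONNECT, property reads, CC.EN = 1, IDENTIFY. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("SN: %.20s MN: %.40s\n", (const char *)cdata->sn,
	       (const char *)cdata->mn);

	spdk_nvme_detach(ctrlr);
	return 0;
}

spdk_nvme_connect() performs synchronously what the DEBUG entries below show step by step: socket connect, ICReq/ICResp exchange, FABRIC CONNECT, and controller enable.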
[2024-11-25 12:25:15.351120] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization...
[2024-11-25 12:25:15.351152] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86348 ]
[2024-11-25 12:25:15.503162] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout)
[2024-11-25 12:25:15.503209] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
[2024-11-25 12:25:15.503213] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
[2024-11-25 12:25:15.503223] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
[2024-11-25 12:25:15.503232] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
[2024-11-25 12:25:15.503452] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout)
[2024-11-25 12:25:15.503486] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x83cd90 0
[2024-11-25 12:25:15.510783] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
[2024-11-25 12:25:15.510786] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
[2024-11-25 12:25:15.510825] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
[2024-11-25 12:25:15.518791] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
[2024-11-25 12:25:15.518797] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout)
[2024-11-25 12:25:15.518801] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout)
[2024-11-25 12:25:15.518823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-25 12:25:15.518900] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout)
[2024-11-25 12:25:15.518905] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout)
[2024-11-25 12:25:15.518921] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-25 12:25:15.518993] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout)
[2024-11-25 12:25:15.518998] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms)
[2024-11-25 12:25:15.519013] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-25 12:25:15.519084] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
[2024-11-25 12:25:15.519101] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-25 12:25:15.519165] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0
[2024-11-25 12:25:15.519168] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms)
[2024-11-25 12:25:15.519173] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
[2024-11-25 12:25:15.519277] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1
[2024-11-25 12:25:15.519280] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
[2024-11-25 12:25:15.519297] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-25 12:25:15.519367] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
[2024-11-25 12:25:15.519384] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-25 12:25:15.519452] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
[2024-11-25 12:25:15.519456] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms)
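The PROPERTY SET / PROPERTY GET pairs above are the fabrics equivalent of touching the CC and CSTS registers over PCIe: the host writes CC.EN and then polls CSTS.RDY until the controller reports ready. The toy model below is not SPDK code; prop_get()/prop_set() and the instantly-ready controller are invented stand-ins for Fabrics Property Get/Set, while the register offsets and bit positions are the ones the NVMe specification defines.

/* Toy model of the property-based enable handshake traced above. */
#include <stdint.h>
#include <stdio.h>

#define NVME_REG_CC   0x14          /* Controller Configuration */
#define NVME_REG_CSTS 0x1c          /* Controller Status */
#define NVME_CC_EN    (1u << 0)
#define NVME_CSTS_RDY (1u << 0)

static uint32_t regs[0x20];         /* simulated property space */

static uint32_t prop_get(uint32_t off) { return regs[off]; }

static void prop_set(uint32_t off, uint32_t v)
{
	regs[off] = v;
	if (off == NVME_REG_CC && (v & NVME_CC_EN)) {
		regs[NVME_REG_CSTS] |= NVME_CSTS_RDY;  /* toy controller becomes ready at once */
	}
}

int main(void)
{
	/* "Setting CC.EN = 1" */
	prop_set(NVME_REG_CC, prop_get(NVME_REG_CC) | NVME_CC_EN);
	/* poll, as the repeated FABRIC PROPERTY GETs above do */
	while ((prop_get(NVME_REG_CSTS) & NVME_CSTS_RDY) == 0) {
	}
	printf("CC.EN = 1 && CSTS.RDY = 1 - controller is ready\n");
	return 0;
}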
[2024-11-25 12:25:15.519461] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout)
[2024-11-25 12:25:15.519467] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms)
[2024-11-25 12:25:15.519483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-25 12:25:15.519579] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x83cd90): datao=0, datal=4096, cccid=0
[2024-11-25 12:25:15.519582] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x87d600) on tqpair(0x83cd90): expected_datao=0, payload_size=4096
[2024-11-25 12:25:15.519617] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295
[2024-11-25 12:25:15.519621] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072
[2024-11-25 12:25:15.519624] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001
[2024-11-25 12:25:15.519627] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16
[2024-11-25 12:25:15.519630] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1
[2024-11-25 12:25:15.519633] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms)
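The two max_xfer_size figures above reconcile as follows: the TCP transport itself imposes no practical transfer limit (4294967295 bytes), so the effective cap comes from the controller's MDTS field, which is reported as a power of two in units of the minimum memory page size. Assuming CAP.MPSMIN corresponds to 4 KiB pages (an assumption; this log does not print CAP), 4096 * 2^5 = 131072, so an MDTS value of 5 yields the 128 KiB cap shown.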
[2024-11-25 12:25:15.519641] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms)
[2024-11-25 12:25:15.519657] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0
[2024-11-25 12:25:15.519738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-25 12:25:15.519764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-25 12:25:15.519778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-25 12:25:15.519792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-25 12:25:15.519795] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms)
[2024-11-25 12:25:15.519803] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms)
[2024-11-25 12:25:15.519815] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-25 12:25:15.519940] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us
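A note on the 5000000 us figure: the GET FEATURES (KEEP ALIVE TIMER, FID 0Fh) completion above returns the keep-alive timeout the target granted. If that timeout is the 10 seconds SPDK uses by default for fabrics controllers and the host schedules keep alives at half the granted timeout (an assumption about the implementation, consistent with this output), the resulting send interval is 10 s / 2 = 5 s = 5000000 us.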
[2024-11-25 12:25:15.519943] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms)
[2024-11-25 12:25:15.519949] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms)
[2024-11-25 12:25:15.519956] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms)
[2024-11-25 12:25:15.519971] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0
[2024-11-25 12:25:15.520092] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms)
[2024-11-25 12:25:15.520106] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms)
[2024-11-25 12:25:15.520120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-25 12:25:15.520243] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added
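The IDENTIFY with cdw10:00000002 (CNS 02h, active namespace ID list) above is the scan that produced "Namespace 1 was added". Applications see its result through SPDK's namespace iterator; a minimal sketch, assuming a ctrlr handle obtained via spdk_nvme_connect() as in the earlier fragment:

/* Illustrative walk over the active namespaces found by the CNS 02h scan. */
#include <inttypes.h>
#include <stdio.h>
#include "spdk/nvme.h"

static void list_active_namespaces(struct spdk_nvme_ctrlr *ctrlr)
{
	uint32_t nsid;

	for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
	     nsid != 0;
	     nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
		struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

		/* For the target above this would print one line, for nsid 1. */
		printf("Namespace %" PRIu32 ": %" PRIu64 " sectors of %" PRIu32 " bytes\n",
		       nsid, spdk_nvme_ns_get_num_sectors(ns),
		       spdk_nvme_ns_get_sector_size(ns));
	}
}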
12:25:15.520218] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:19.697 [2024-11-25 12:25:15.520224] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:19.697 [2024-11-25 12:25:15.520228] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:19.697 [2024-11-25 12:25:15.520231] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:19.697 [2024-11-25 12:25:15.520234] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x87dc00) on tqpair=0x83cd90 00:26:19.697 [2024-11-25 12:25:15.520243] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:26:19.697 [2024-11-25 12:25:15.520250] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:26:19.697 [2024-11-25 12:25:15.520257] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:26:19.697 [2024-11-25 12:25:15.520262] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:19.697 [2024-11-25 12:25:15.520264] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x83cd90) 00:26:19.697 [2024-11-25 12:25:15.520270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.697 [2024-11-25 12:25:15.520281] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x87dc00, cid 4, qid 0 00:26:19.697 [2024-11-25 12:25:15.520340] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:19.697 [2024-11-25 12:25:15.520345] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:19.697 [2024-11-25 12:25:15.520347] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:19.697 [2024-11-25 12:25:15.520350] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x83cd90): datao=0, datal=4096, cccid=4 00:26:19.697 [2024-11-25 12:25:15.520352] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x87dc00) on tqpair(0x83cd90): expected_datao=0, payload_size=4096 00:26:19.697 [2024-11-25 12:25:15.520355] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:19.697 [2024-11-25 12:25:15.520360] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:19.697 [2024-11-25 12:25:15.520363] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:19.697 [2024-11-25 12:25:15.520369] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:19.697 [2024-11-25 12:25:15.520373] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:19.697 [2024-11-25 12:25:15.520375] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:19.697 [2024-11-25 12:25:15.520378] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x87dc00) on tqpair=0x83cd90 00:26:19.697 [2024-11-25 12:25:15.520390] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:26:19.697 [2024-11-25 12:25:15.520396] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:26:19.697 [2024-11-25 12:25:15.520401] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
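The *DEBUG* records here are the SPDK identify example driving the admin queue through controller initialization over TCP: identify controller, set/wait for number of queues, identify active namespaces, then the per-namespace identifies. A pass like this can be re-run by hand against the same listener; the binary path and the -L log-flag names below are assumptions about a built SPDK tree, not commands captured from this job:

  # Hedged sketch: re-run the identify pass against the target from this job.
  # Path and -L flag names are assumed; check ./build/examples/identify --help.
  cd /home/vagrant/spdk_repo/spdk
  ./build/examples/identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -L nvme -L nvme_tcp   # debug logflags that produce *DEBUG* records like the ones here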
00:26:19.697 [2024-11-25 12:25:15.520404] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x83cd90) 00:26:19.697 [2024-11-25 12:25:15.520409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.697 [2024-11-25 12:25:15.520420] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x87dc00, cid 4, qid 0 00:26:19.697 [2024-11-25 12:25:15.520468] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:19.697 [2024-11-25 12:25:15.520473] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:19.697 [2024-11-25 12:25:15.520475] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:19.697 [2024-11-25 12:25:15.520478] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x83cd90): datao=0, datal=4096, cccid=4 00:26:19.697 [2024-11-25 12:25:15.520481] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x87dc00) on tqpair(0x83cd90): expected_datao=0, payload_size=4096 00:26:19.697 [2024-11-25 12:25:15.520484] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:19.697 [2024-11-25 12:25:15.520489] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:19.697 [2024-11-25 12:25:15.520491] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:19.697 [2024-11-25 12:25:15.520497] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:19.697 [2024-11-25 12:25:15.520502] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:19.697 [2024-11-25 12:25:15.520504] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:19.697 [2024-11-25 12:25:15.520507] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x87dc00) on tqpair=0x83cd90 00:26:19.697 [2024-11-25 12:25:15.520513] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:26:19.697 [2024-11-25 12:25:15.520518] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:26:19.697 [2024-11-25 12:25:15.520525] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:26:19.697 [2024-11-25 12:25:15.520529] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:26:19.697 [2024-11-25 12:25:15.520532] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:26:19.697 [2024-11-25 12:25:15.520536] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:26:19.697 [2024-11-25 12:25:15.520540] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:26:19.698 [2024-11-25 12:25:15.520543] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:26:19.698 [2024-11-25 12:25:15.520546] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:26:19.698 [2024-11-25 
12:25:15.520558] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:19.698 [2024-11-25 12:25:15.520561] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x83cd90) 00:26:19.698 [2024-11-25 12:25:15.520566] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.698 [2024-11-25 12:25:15.520571] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:19.698 [2024-11-25 12:25:15.520574] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:19.698 [2024-11-25 12:25:15.520576] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x83cd90) 00:26:19.698 [2024-11-25 12:25:15.520582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:26:19.698 [2024-11-25 12:25:15.520596] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x87dc00, cid 4, qid 0 00:26:19.698 [2024-11-25 12:25:15.520600] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x87dd80, cid 5, qid 0 00:26:19.698 [2024-11-25 12:25:15.520661] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:19.698 [2024-11-25 12:25:15.520670] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:19.698 [2024-11-25 12:25:15.520672] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:19.698 [2024-11-25 12:25:15.520675] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x87dc00) on tqpair=0x83cd90 00:26:19.698 [2024-11-25 12:25:15.520680] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:19.698 [2024-11-25 12:25:15.520685] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:19.698 [2024-11-25 12:25:15.520687] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:19.698 [2024-11-25 12:25:15.520690] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x87dd80) on tqpair=0x83cd90 00:26:19.698 [2024-11-25 12:25:15.520697] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:19.698 [2024-11-25 12:25:15.520700] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x83cd90) 00:26:19.698 [2024-11-25 12:25:15.520705] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.698 [2024-11-25 12:25:15.520715] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x87dd80, cid 5, qid 0 00:26:19.698 [2024-11-25 12:25:15.520767] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:19.698 [2024-11-25 12:25:15.520773] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:19.698 [2024-11-25 12:25:15.520775] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:19.698 [2024-11-25 12:25:15.520778] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x87dd80) on tqpair=0x83cd90 00:26:19.698 [2024-11-25 12:25:15.520785] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:19.698 [2024-11-25 12:25:15.520787] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x83cd90) 00:26:19.698 [2024-11-25 12:25:15.520792] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:19.698 [2024-11-25 12:25:15.520804] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x87dd80, cid 5, qid 0 00:26:19.698 [2024-11-25 12:25:15.520846] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:19.698 [2024-11-25 12:25:15.520856] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:19.698 [2024-11-25 12:25:15.520859] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:19.698 [2024-11-25 12:25:15.520862] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x87dd80) on tqpair=0x83cd90 00:26:19.698 [2024-11-25 12:25:15.520869] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:19.698 [2024-11-25 12:25:15.520871] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x83cd90) 00:26:19.698 [2024-11-25 12:25:15.520877] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.698 [2024-11-25 12:25:15.520887] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x87dd80, cid 5, qid 0 00:26:19.698 [2024-11-25 12:25:15.520930] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:19.698 [2024-11-25 12:25:15.520935] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:19.698 [2024-11-25 12:25:15.520938] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:19.698 [2024-11-25 12:25:15.520940] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x87dd80) on tqpair=0x83cd90 00:26:19.698 [2024-11-25 12:25:15.520952] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:19.698 [2024-11-25 12:25:15.520955] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x83cd90) 00:26:19.698 [2024-11-25 12:25:15.520960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.698 [2024-11-25 12:25:15.520966] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:19.698 [2024-11-25 12:25:15.520968] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x83cd90) 00:26:19.698 [2024-11-25 12:25:15.520973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.698 [2024-11-25 12:25:15.520979] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:19.698 [2024-11-25 12:25:15.520982] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x83cd90) 00:26:19.698 [2024-11-25 12:25:15.520986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.698 [2024-11-25 12:25:15.520993] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:19.698 [2024-11-25 12:25:15.520995] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x83cd90) 00:26:19.698 [2024-11-25 12:25:15.521000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.698 [2024-11-25 12:25:15.521012] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x87dd80, cid 5, qid 0 00:26:19.698 [2024-11-25 12:25:15.521016] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x87dc00, cid 4, qid 0 00:26:19.698 [2024-11-25 12:25:15.521020] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x87df00, cid 6, qid 0 00:26:19.698 [2024-11-25 12:25:15.521024] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x87e080, cid 7, qid 0 00:26:19.698 ===================================================== 00:26:19.698 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:26:19.698 ===================================================== 00:26:19.698 Controller Capabilities/Features 00:26:19.698 ================================ 00:26:19.698 Vendor ID: 8086 00:26:19.698 Subsystem Vendor ID: 8086 00:26:19.698 Serial Number: SPDK00000000000001 00:26:19.698 Model Number: SPDK bdev Controller 00:26:19.698 Firmware Version: 25.01 00:26:19.698 Recommended Arb Burst: 6 00:26:19.698 IEEE OUI Identifier: e4 d2 5c 00:26:19.698 Multi-path I/O 00:26:19.698 May have multiple subsystem ports: Yes 00:26:19.698 May have multiple controllers: Yes 00:26:19.698 Associated with SR-IOV VF: No 00:26:19.698 Max Data Transfer Size: 131072 00:26:19.698 Max Number of Namespaces: 32 00:26:19.698 Max Number of I/O Queues: 127 00:26:19.698 NVMe Specification Version (VS): 1.3 00:26:19.698 NVMe Specification Version (Identify): 1.3 00:26:19.698 Maximum Queue Entries: 128 00:26:19.698 Contiguous Queues Required: Yes 00:26:19.698 Arbitration Mechanisms Supported 00:26:19.698 Weighted Round Robin: Not Supported 00:26:19.698 Vendor Specific: Not Supported 00:26:19.698 Reset Timeout: 15000 ms 00:26:19.698 Doorbell Stride: 4 bytes 00:26:19.698 NVM Subsystem Reset: Not Supported 00:26:19.698 Command Sets Supported 00:26:19.698 NVM Command Set: Supported 00:26:19.698 Boot Partition: Not Supported 00:26:19.698 Memory Page Size Minimum: 4096 bytes 00:26:19.698 Memory Page Size Maximum: 4096 bytes 00:26:19.698 Persistent Memory Region: Not Supported 00:26:19.698 Optional Asynchronous Events Supported 00:26:19.699 Namespace Attribute Notices: Supported 00:26:19.699 Firmware Activation Notices: Not Supported 00:26:19.699 ANA Change Notices: Not Supported 00:26:19.699 PLE Aggregate Log Change Notices: Not Supported 00:26:19.699 LBA Status Info Alert Notices: Not Supported 00:26:19.699 EGE Aggregate Log Change Notices: Not Supported 00:26:19.699 Normal NVM Subsystem Shutdown event: Not Supported 00:26:19.699 Zone Descriptor Change Notices: Not Supported 00:26:19.699 Discovery Log Change Notices: Not Supported 00:26:19.699 Controller Attributes 00:26:19.699 128-bit Host Identifier: Supported 00:26:19.699 Non-Operational Permissive Mode: Not Supported 00:26:19.699 NVM Sets: Not Supported 00:26:19.699 Read Recovery Levels: Not Supported 00:26:19.699 Endurance Groups: Not Supported 00:26:19.699 Predictable Latency Mode: Not Supported 00:26:19.699 Traffic Based Keep Alive: Not Supported 00:26:19.699 Namespace Granularity: Not Supported 00:26:19.699 SQ Associations: Not Supported 00:26:19.699 UUID List: Not Supported 00:26:19.699 Multi-Domain Subsystem: Not Supported 00:26:19.699 Fixed Capacity Management: Not Supported 00:26:19.699 Variable Capacity Management: Not Supported 00:26:19.699 Delete Endurance Group: Not Supported 00:26:19.699 Delete NVM Set: Not Supported 00:26:19.699 Extended LBA Formats Supported: Not Supported 00:26:19.699 Flexible Data Placement Supported: Not Supported
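The four GET LOG PAGE (02) capsules sent just before this report (cdw10 07ff0001, 007f0002, 007f0003 and 03ff0005 on cids 5, 4, 6 and 7) encode both the log page identifier and the transfer length, which is why the c2h_data records that follow carry payload_size 8192, 512, 512 and 4096. Per the NVMe spec, cdw10 bits 7:0 are the log page ID and bits 31:16 the lower half of the dword count minus one; a small decoder, illustrative only:

  # Decode the Get Log Page cdw10 values seen in the capsules above.
  decode_glp() {
      local cdw10=$((16#$1))
      local lid=$((cdw10 & 0xff))               # log page identifier
      local numdl=$(((cdw10 >> 16) & 0xffff))   # NUMDL: (dwords - 1), low 16 bits
      printf 'cdw10=%s -> LID=0x%02x, transfer=%d bytes\n' "$1" "$lid" $(((numdl + 1) * 4))
  }
  decode_glp 07ff0001   # LID 0x01, Error Information, 8192 bytes
  decode_glp 007f0002   # LID 0x02, SMART / Health Information, 512 bytes
  decode_glp 007f0003   # LID 0x03, Firmware Slot Information, 512 bytes
  decode_glp 03ff0005   # LID 0x05, Commands Supported and Effects, 4096 bytes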
00:26:19.699 00:26:19.699 Controller Memory Buffer Support 00:26:19.699 ================================ 00:26:19.699 Supported: No 00:26:19.699 00:26:19.699 Persistent Memory Region Support 00:26:19.699 ================================ 00:26:19.699 Supported: No 00:26:19.699 00:26:19.699 Admin Command Set Attributes 00:26:19.699 ============================ 00:26:19.699 Security Send/Receive: Not Supported 00:26:19.699 Format NVM: Not Supported 00:26:19.699 Firmware Activate/Download: Not Supported 00:26:19.699 Namespace Management: Not Supported 00:26:19.699 Device Self-Test: Not Supported 00:26:19.699 Directives: Not Supported 00:26:19.699 NVMe-MI: Not Supported 00:26:19.699 Virtualization Management: Not Supported 00:26:19.699 Doorbell Buffer Config: Not Supported 00:26:19.699 Get LBA Status Capability: Not Supported 00:26:19.699 Command & Feature Lockdown Capability: Not Supported 00:26:19.699 Abort Command Limit: 4 00:26:19.699 Async Event Request Limit: 4 00:26:19.699 Number of Firmware Slots: N/A 00:26:19.699 Firmware Slot 1 Read-Only: N/A 00:26:19.699 Firmware Activation Without Reset: N/A [2024-11-25 12:25:15.521152] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:19.699 [2024-11-25 12:25:15.521162] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:19.699 [2024-11-25 12:25:15.521164] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:19.699 [2024-11-25 12:25:15.521167] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x83cd90): datao=0, datal=8192, cccid=5 00:26:19.699 [2024-11-25 12:25:15.521170] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x87dd80) on tqpair(0x83cd90): expected_datao=0, payload_size=8192 00:26:19.699 [2024-11-25 12:25:15.521173] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:19.699 [2024-11-25 12:25:15.521185] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:19.699 [2024-11-25 12:25:15.521188] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:19.699 [2024-11-25 12:25:15.521192] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:19.699 [2024-11-25 12:25:15.521197] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:19.699 [2024-11-25 12:25:15.521199] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:19.699 [2024-11-25 12:25:15.521202] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x83cd90): datao=0, datal=512, cccid=4 00:26:19.699 [2024-11-25 12:25:15.521205] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x87dc00) on tqpair(0x83cd90): expected_datao=0, payload_size=512 00:26:19.699 [2024-11-25 12:25:15.521207] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:19.699 [2024-11-25 12:25:15.521212] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:19.699 [2024-11-25 12:25:15.521215] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:19.699 [2024-11-25 12:25:15.521219] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:19.699 [2024-11-25 12:25:15.521224] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:19.699 [2024-11-25 12:25:15.521226] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:19.699 [2024-11-25 12:25:15.521228] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x83cd90): datao=0, datal=512, cccid=6 00:26:19.699 [2024-11-25 12:25:15.521231]
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x87df00) on tqpair(0x83cd90): expected_datao=0, payload_size=512 00:26:19.699 [2024-11-25 12:25:15.521234] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:19.699 [2024-11-25 12:25:15.521239] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:19.699 [2024-11-25 12:25:15.521242] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:19.699 [2024-11-25 12:25:15.521246] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:19.699 [2024-11-25 12:25:15.521250] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:19.699 [2024-11-25 12:25:15.521253] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:19.699 [2024-11-25 12:25:15.521255] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x83cd90): datao=0, datal=4096, cccid=7 00:26:19.699 [2024-11-25 12:25:15.521258] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x87e080) on tqpair(0x83cd90): expected_datao=0, payload_size=4096 00:26:19.699 [2024-11-25 12:25:15.521261] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:19.699 [2024-11-25 12:25:15.521266] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:19.699 [2024-11-25 12:25:15.521269] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:19.699 [2024-11-25 12:25:15.521275] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:19.699 [2024-11-25 12:25:15.521279] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:19.699 [2024-11-25 12:25:15.521282] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:19.699 [2024-11-25 12:25:15.521284] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x87dd80) on tqpair=0x83cd90 00:26:19.699 [2024-11-25 12:25:15.521295] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:19.699 [2024-11-25 12:25:15.521299] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:19.699 [2024-11-25 12:25:15.521302] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:19.699 [2024-11-25 12:25:15.521305] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x87dc00) on tqpair=0x83cd90 00:26:19.700 [2024-11-25 12:25:15.521314] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:19.700 [2024-11-25 12:25:15.521318] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:19.700 [2024-11-25 12:25:15.521321] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:19.700 [2024-11-25 12:25:15.521323] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x87df00) on tqpair=0x83cd90 00:26:19.700 [2024-11-25 12:25:15.521329] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:19.700 [2024-11-25 12:25:15.521333] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:19.700 [2024-11-25 12:25:15.521336] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:19.700 [2024-11-25 12:25:15.521338] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x87e080) on tqpair=0x83cd90 00:26:19.700 Multiple Update Detection Support: N/A 00:26:19.700 Firmware Update Granularity: No Information Provided 00:26:19.700 Per-Namespace SMART Log: No 00:26:19.700 Asymmetric Namespace Access Log Page: Not Supported 00:26:19.700 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:26:19.700 Command
Effects Log Page: Supported 00:26:19.700 Get Log Page Extended Data: Supported 00:26:19.700 Telemetry Log Pages: Not Supported 00:26:19.700 Persistent Event Log Pages: Not Supported 00:26:19.700 Supported Log Pages Log Page: May Support 00:26:19.700 Commands Supported & Effects Log Page: Not Supported 00:26:19.700 Feature Identifiers & Effects Log Page: May Support 00:26:19.700 NVMe-MI Commands & Effects Log Page: May Support 00:26:19.700 Data Area 4 for Telemetry Log: Not Supported 00:26:19.700 Error Log Page Entries Supported: 128 00:26:19.700 Keep Alive: Supported 00:26:19.700 Keep Alive Granularity: 10000 ms 00:26:19.700 00:26:19.700 NVM Command Set Attributes 00:26:19.700 ========================== 00:26:19.700 Submission Queue Entry Size 00:26:19.700 Max: 64 00:26:19.700 Min: 64 00:26:19.700 Completion Queue Entry Size 00:26:19.700 Max: 16 00:26:19.700 Min: 16 00:26:19.700 Number of Namespaces: 32 00:26:19.700 Compare Command: Supported 00:26:19.700 Write Uncorrectable Command: Not Supported 00:26:19.700 Dataset Management Command: Supported 00:26:19.700 Write Zeroes Command: Supported 00:26:19.700 Set Features Save Field: Not Supported 00:26:19.700 Reservations: Supported 00:26:19.700 Timestamp: Not Supported 00:26:19.700 Copy: Supported 00:26:19.700 Volatile Write Cache: Present 00:26:19.700 Atomic Write Unit (Normal): 1 00:26:19.700 Atomic Write Unit (PFail): 1 00:26:19.700 Atomic Compare & Write Unit: 1 00:26:19.700 Fused Compare & Write: Supported 00:26:19.700 Scatter-Gather List 00:26:19.700 SGL Command Set: Supported 00:26:19.700 SGL Keyed: Supported 00:26:19.700 SGL Bit Bucket Descriptor: Not Supported 00:26:19.700 SGL Metadata Pointer: Not Supported 00:26:19.700 Oversized SGL: Not Supported 00:26:19.700 SGL Metadata Address: Not Supported 00:26:19.700 SGL Offset: Supported 00:26:19.700 Transport SGL Data Block: Not Supported 00:26:19.700 Replay Protected Memory Block: Not Supported 00:26:19.700 00:26:19.700 Firmware Slot Information 00:26:19.700 ========================= 00:26:19.700 Active slot: 1 00:26:19.700 Slot 1 Firmware Revision: 25.01 00:26:19.700 00:26:19.700 00:26:19.700 Commands Supported and Effects 00:26:19.700 ============================== 00:26:19.700 Admin Commands 00:26:19.700 -------------- 00:26:19.700 Get Log Page (02h): Supported 00:26:19.700 Identify (06h): Supported 00:26:19.700 Abort (08h): Supported 00:26:19.700 Set Features (09h): Supported 00:26:19.700 Get Features (0Ah): Supported 00:26:19.700 Asynchronous Event Request (0Ch): Supported 00:26:19.700 Keep Alive (18h): Supported 00:26:19.700 I/O Commands 00:26:19.700 ------------ 00:26:19.700 Flush (00h): Supported LBA-Change 00:26:19.700 Write (01h): Supported LBA-Change 00:26:19.700 Read (02h): Supported 00:26:19.700 Compare (05h): Supported 00:26:19.700 Write Zeroes (08h): Supported LBA-Change 00:26:19.700 Dataset Management (09h): Supported LBA-Change 00:26:19.700 Copy (19h): Supported LBA-Change 00:26:19.700 00:26:19.700 Error Log 00:26:19.700 ========= 00:26:19.700 00:26:19.700 Arbitration 00:26:19.700 =========== 00:26:19.700 Arbitration Burst: 1 00:26:19.700 00:26:19.700 Power Management 00:26:19.700 ================ 00:26:19.700 Number of Power States: 1 00:26:19.700 Current Power State: Power State #0 00:26:19.700 Power State #0: 00:26:19.700 Max Power: 0.00 W 00:26:19.700 Non-Operational State: Operational 00:26:19.700 Entry Latency: Not Reported 00:26:19.700 Exit Latency: Not Reported 00:26:19.700 Relative Read Throughput: 0 00:26:19.700 Relative Read Latency: 0 00:26:19.700 Relative
Write Throughput: 0 00:26:19.700 Relative Write Latency: 0 00:26:19.700 Idle Power: Not Reported 00:26:19.700 Active Power: Not Reported 00:26:19.700 Non-Operational Permissive Mode: Not Supported 00:26:19.700 00:26:19.700 Health Information 00:26:19.700 ================== 00:26:19.700 Critical Warnings: 00:26:19.700 Available Spare Space: OK 00:26:19.700 Temperature: OK 00:26:19.700 Device Reliability: OK 00:26:19.700 Read Only: No 00:26:19.700 Volatile Memory Backup: OK 00:26:19.700 Current Temperature: 0 Kelvin (-273 Celsius) 00:26:19.700 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:26:19.700 Available Spare: 0% 00:26:19.700 Available Spare Threshold: 0% 00:26:19.700 Life Percentage Used: 0% [2024-11-25 12:25:15.521424] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:19.700 [2024-11-25 12:25:15.521428] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x83cd90) 00:26:19.700 [2024-11-25 12:25:15.521433] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.700 [2024-11-25 12:25:15.521446] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x87e080, cid 7, qid 0 00:26:19.700 [2024-11-25 12:25:15.521497] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:19.700 [2024-11-25 12:25:15.521502] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:19.700 [2024-11-25 12:25:15.521504] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:19.700 [2024-11-25 12:25:15.521507] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x87e080) on tqpair=0x83cd90 00:26:19.700 [2024-11-25 12:25:15.521533] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:26:19.700 [2024-11-25 12:25:15.521540] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x87d600) on tqpair=0x83cd90 00:26:19.700 [2024-11-25 12:25:15.521544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.700 [2024-11-25 12:25:15.521548] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x87d780) on tqpair=0x83cd90 00:26:19.700 [2024-11-25 12:25:15.521551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.700 [2024-11-25 12:25:15.521555] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x87d900) on tqpair=0x83cd90 00:26:19.700 [2024-11-25 12:25:15.521558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.700 [2024-11-25 12:25:15.521561] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x87da80) on tqpair=0x83cd90 00:26:19.700 [2024-11-25 12:25:15.521565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.701 [2024-11-25 12:25:15.521571] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:19.701 [2024-11-25 12:25:15.521573] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:19.701 [2024-11-25 12:25:15.521576] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x83cd90) 00:26:19.701 [2024-11-25 12:25:15.521581] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.701 [2024-11-25 12:25:15.521594] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x87da80, cid 3, qid 0 00:26:19.701 [2024-11-25 12:25:15.521633] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:19.701 [2024-11-25 12:25:15.521637] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:19.701 [2024-11-25 12:25:15.521640] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:19.701 [2024-11-25 12:25:15.521642] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x87da80) on tqpair=0x83cd90 00:26:19.701 [2024-11-25 12:25:15.521648] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:19.701 [2024-11-25 12:25:15.521650] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:19.701 [2024-11-25 12:25:15.521652] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x83cd90) 00:26:19.701 [2024-11-25 12:25:15.521658] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.701 [2024-11-25 12:25:15.521670] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x87da80, cid 3, qid 0 00:26:19.701 [2024-11-25 12:25:15.521721] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:19.701 [2024-11-25 12:25:15.521726] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:19.701 [2024-11-25 12:25:15.521728] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:19.701 [2024-11-25 12:25:15.521730] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x87da80) on tqpair=0x83cd90 00:26:19.701 [2024-11-25 12:25:15.521734] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:26:19.701 [2024-11-25 12:25:15.521737] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:26:19.701 [2024-11-25 12:25:15.521744] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:19.701 [2024-11-25 12:25:15.521746] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:19.701 [2024-11-25 12:25:15.521760] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x83cd90) 00:26:19.701 [2024-11-25 12:25:15.521766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.701 [2024-11-25 12:25:15.521777] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x87da80, cid 3, qid 0 00:26:19.701 [2024-11-25 12:25:15.521816] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:19.701 [2024-11-25 12:25:15.521822] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:19.701 [2024-11-25 12:25:15.521824] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:19.701 [2024-11-25 12:25:15.521827] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x87da80) on tqpair=0x83cd90 00:26:19.701 [2024-11-25 12:25:15.521835] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:19.701 [2024-11-25 12:25:15.521838] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:19.701 [2024-11-25 12:25:15.521840] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x83cd90) 00:26:19.701 
[2024-11-25 12:25:15.521846] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.701 [2024-11-25 12:25:15.521856] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x87da80, cid 3, qid 0 00:26:19.701 [2024-11-25 12:25:15.521896] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:19.701 [2024-11-25 12:25:15.521943] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:19.701 [2024-11-25 12:25:15.521945] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:19.701 [2024-11-25 12:25:15.521948] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x87da80) on tqpair=0x83cd90 00:26:19.701 [2024-11-25 12:25:15.521956] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:19.701 [2024-11-25 12:25:15.521958] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:19.701 [2024-11-25 12:25:15.521961] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x83cd90) 00:26:19.701 [2024-11-25 12:25:15.521966] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.701 [2024-11-25 12:25:15.521977] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x87da80, cid 3, qid 0 00:26:19.701 [2024-11-25 12:25:15.522018] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:19.701 [2024-11-25 12:25:15.522022] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:19.701 [2024-11-25 12:25:15.522025] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:19.701 [2024-11-25 12:25:15.522028] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x87da80) on tqpair=0x83cd90 00:26:19.701 [2024-11-25 12:25:15.522035] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:19.701 [2024-11-25 12:25:15.522038] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:19.701 [2024-11-25 12:25:15.522040] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x83cd90) 00:26:19.701 [2024-11-25 12:25:15.522045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.701 [2024-11-25 12:25:15.522055] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x87da80, cid 3, qid 0 00:26:19.701 [2024-11-25 12:25:15.522094] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:19.701 [2024-11-25 12:25:15.522099] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:19.701 [2024-11-25 12:25:15.522102] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:19.701 [2024-11-25 12:25:15.522105] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x87da80) on tqpair=0x83cd90 00:26:19.701 [2024-11-25 12:25:15.522112] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:19.701 [2024-11-25 12:25:15.522115] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:19.701 [2024-11-25 12:25:15.522118] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x83cd90) 00:26:19.701 [2024-11-25 12:25:15.522123] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.701 [2024-11-25 12:25:15.522133] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x87da80, cid 3, qid 0 00:26:19.701 [2024-11-25 12:25:15.522174] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:19.701 [2024-11-25 12:25:15.522179] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:19.701 [2024-11-25 12:25:15.522181] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:19.701 [2024-11-25 12:25:15.522184] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x87da80) on tqpair=0x83cd90 00:26:19.701 [2024-11-25 12:25:15.522192] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:19.701 [2024-11-25 12:25:15.522194] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:19.701 [2024-11-25 12:25:15.522197] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x83cd90) 00:26:19.701 [2024-11-25 12:25:15.522202] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.701 [2024-11-25 12:25:15.522212] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x87da80, cid 3, qid 0 00:26:19.701 [2024-11-25 12:25:15.522251] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:19.701 [2024-11-25 12:25:15.522259] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:19.701 [2024-11-25 12:25:15.522262] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:19.701 [2024-11-25 12:25:15.522265] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x87da80) on tqpair=0x83cd90 00:26:19.701 [2024-11-25 12:25:15.522272] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:19.701 [2024-11-25 12:25:15.522275] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:19.701 [2024-11-25 12:25:15.522294] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x83cd90) 00:26:19.701 [2024-11-25 12:25:15.522300] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.701 [2024-11-25 12:25:15.522311] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x87da80, cid 3, qid 0 00:26:19.701 [2024-11-25 12:25:15.522360] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:19.701 [2024-11-25 12:25:15.522370] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:19.701 [2024-11-25 12:25:15.522372] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:19.701 [2024-11-25 12:25:15.522375] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x87da80) on tqpair=0x83cd90 00:26:19.701 [2024-11-25 12:25:15.522383] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:19.701 [2024-11-25 12:25:15.522385] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:19.701 [2024-11-25 12:25:15.522388] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x83cd90) 00:26:19.702 [2024-11-25 12:25:15.522393] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.702 [2024-11-25 12:25:15.522404] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x87da80, cid 3, qid 0 00:26:19.702 [2024-11-25 12:25:15.522443] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:19.702 [2024-11-25 
12:25:15.522448] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:19.702 [2024-11-25 12:25:15.522450] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:19.702 [2024-11-25 12:25:15.522453] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x87da80) on tqpair=0x83cd90 00:26:19.702 [2024-11-25 12:25:15.522460] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:19.702 [2024-11-25 12:25:15.522463] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:19.702 [2024-11-25 12:25:15.522465] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x83cd90) 00:26:19.702 [2024-11-25 12:25:15.522471] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.702 [2024-11-25 12:25:15.522481] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x87da80, cid 3, qid 0 00:26:19.702 [2024-11-25 12:25:15.522524] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:19.702 [2024-11-25 12:25:15.522533] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:19.702 [2024-11-25 12:25:15.522536] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:19.702 [2024-11-25 12:25:15.522538] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x87da80) on tqpair=0x83cd90 00:26:19.702 [2024-11-25 12:25:15.522546] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:19.702 [2024-11-25 12:25:15.522549] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:19.702 [2024-11-25 12:25:15.522551] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x83cd90) 00:26:19.702 [2024-11-25 12:25:15.522557] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.702 [2024-11-25 12:25:15.522567] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x87da80, cid 3, qid 0 00:26:19.702 [2024-11-25 12:25:15.522608] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:19.702 [2024-11-25 12:25:15.522616] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:19.702 [2024-11-25 12:25:15.522619] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:19.702 [2024-11-25 12:25:15.522622] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x87da80) on tqpair=0x83cd90 00:26:19.702 [2024-11-25 12:25:15.522629] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:19.702 [2024-11-25 12:25:15.522632] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:19.702 [2024-11-25 12:25:15.522634] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x83cd90) 00:26:19.702 [2024-11-25 12:25:15.522640] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.702 [2024-11-25 12:25:15.522650] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x87da80, cid 3, qid 0 00:26:19.702 [2024-11-25 12:25:15.522696] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:19.702 [2024-11-25 12:25:15.522701] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:19.702 [2024-11-25 12:25:15.522703] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:19.702 [2024-11-25 
12:25:15.522706] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x87da80) on tqpair=0x83cd90 00:26:19.702 [2024-11-25 12:25:15.522714] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:19.702 [2024-11-25 12:25:15.522716] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:19.702 [2024-11-25 12:25:15.522719] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x83cd90) 00:26:19.702 [2024-11-25 12:25:15.522724] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.702 [2024-11-25 12:25:15.522734] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x87da80, cid 3, qid 0 00:26:19.702 [2024-11-25 12:25:15.526759] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:19.702 [2024-11-25 12:25:15.526768] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:19.702 [2024-11-25 12:25:15.526771] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:19.702 [2024-11-25 12:25:15.526774] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x87da80) on tqpair=0x83cd90 00:26:19.702 [2024-11-25 12:25:15.526782] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:19.702 [2024-11-25 12:25:15.526784] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:19.702 [2024-11-25 12:25:15.526787] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x83cd90) 00:26:19.702 [2024-11-25 12:25:15.526792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.702 [2024-11-25 12:25:15.526806] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x87da80, cid 3, qid 0 00:26:19.702 [2024-11-25 12:25:15.526845] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:19.702 [2024-11-25 12:25:15.526850] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:19.702 [2024-11-25 12:25:15.526852] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:19.702 [2024-11-25 12:25:15.526855] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x87da80) on tqpair=0x83cd90 00:26:19.702 [2024-11-25 12:25:15.526860] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 5 milliseconds 00:26:19.702 Data Units Read: 0 00:26:19.702 Data Units Written: 0 00:26:19.702 Host Read Commands: 0 00:26:19.702 Host Write Commands: 0 00:26:19.702 Controller Busy Time: 0 minutes 00:26:19.702 Power Cycles: 0 00:26:19.702 Power On Hours: 0 hours 00:26:19.702 Unsafe Shutdowns: 0 00:26:19.702 Unrecoverable Media Errors: 0 00:26:19.702 Lifetime Error Log Entries: 0 00:26:19.702 Warning Temperature Time: 0 minutes 00:26:19.702 Critical Temperature Time: 0 minutes 00:26:19.702 00:26:19.702 Number of Queues 00:26:19.702 ================ 00:26:19.702 Number of I/O Submission Queues: 127 00:26:19.702 Number of I/O Completion Queues: 127 00:26:19.702 00:26:19.702 Active Namespaces 00:26:19.702 ================= 00:26:19.702 Namespace ID:1 00:26:19.702 Error Recovery Timeout: Unlimited 00:26:19.702 Command Set Identifier: NVM (00h) 00:26:19.702 Deallocate: Supported 00:26:19.702 Deallocated/Unwritten Error: Not Supported 00:26:19.702 Deallocated Read Value: Unknown 00:26:19.702 Deallocate in Write Zeroes: Not Supported
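The whole destruct sequence is in the records above, just interleaved with the report: "Prepare to destruct SSD", the four ABORTED - SQ DELETION completions for the outstanding admin requests, RTD3E = 0 us, shutdown timeout = 10000 ms, a few FABRIC PROPERTY GET polls of the controller status register, and finally "shutdown complete in 5 milliseconds". When reading a saved copy of this console output, the init and shutdown state machines are easier to follow pulled out of the stream; the file name below is assumed:

  # Assumes the console output was saved as console.log; the patterns match
  # the nvme_ctrlr.c *DEBUG* strings emitted in this run.
  grep -o 'setting state to [^(]*' console.log | uniq      # init state machine
  grep -E 'RTD3E|shutdown (timeout|complete)' console.log  # shutdown milestones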
00:26:19.702 Deallocated Guard Field: 0xFFFF 00:26:19.702 Flush: Supported 00:26:19.702 Reservation: Supported 00:26:19.702 Namespace Sharing Capabilities: Multiple Controllers 00:26:19.702 Size (in LBAs): 131072 (0GiB) 00:26:19.702 Capacity (in LBAs): 131072 (0GiB) 00:26:19.702 Utilization (in LBAs): 131072 (0GiB) 00:26:19.702 NGUID: ABCDEF0123456789ABCDEF0123456789 00:26:19.702 EUI64: ABCDEF0123456789 00:26:19.702 UUID: 5d43fb33-1199-4e75-9451-daac97fcc798 00:26:19.702 Thin Provisioning: Not Supported 00:26:19.702 Per-NS Atomic Units: Yes 00:26:19.702 Atomic Boundary Size (Normal): 0 00:26:19.702 Atomic Boundary Size (PFail): 0 00:26:19.702 Atomic Boundary Offset: 0 00:26:19.702 Maximum Single Source Range Length: 65535 00:26:19.702 Maximum Copy Length: 65535 00:26:19.702 Maximum Source Range Count: 1 00:26:19.702 NGUID/EUI64 Never Reused: No 00:26:19.702 Namespace Write Protected: No 00:26:19.702 Number of LBA Formats: 1 00:26:19.702 Current LBA Format: LBA Format #00 00:26:19.702 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:19.702 00:26:19.702 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:26:19.960 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:19.960 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.960 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:19.960 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.960 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:26:19.960 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:26:19.961 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:19.961 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:26:19.961 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:19.961 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:26:19.961 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:19.961 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:19.961 rmmod nvme_tcp 00:26:19.961 rmmod nvme_fabrics 00:26:19.961 rmmod nvme_keyring 00:26:19.961 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:19.961 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:26:19.961 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:26:19.961 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 86293 ']' 00:26:19.961 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 86293 00:26:19.961 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 86293 ']' 00:26:19.961 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 86293 00:26:19.961 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:26:19.961 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:19.961 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86293 00:26:19.961 12:25:15 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:19.961 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:19.961 killing process with pid 86293 00:26:19.961 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86293' 00:26:19.961 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 86293 00:26:19.961 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 86293 00:26:19.961 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:19.961 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:19.961 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:19.961 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:26:19.961 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:26:19.961 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:19.961 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:26:19.961 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:19.961 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:19.961 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:19.961 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:20.219 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:20.219 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:20.219 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:20.219 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:20.219 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:20.219 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:20.219 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:20.219 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:20.219 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:20.219 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:20.219 12:25:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:20.219 12:25:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:20.219 12:25:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:20.219 12:25:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:20.219 12:25:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:20.219 12:25:16 nvmf_tcp.nvmf_host.nvmf_identify 
-- nvmf/common.sh@300 -- # return 0 00:26:20.219 00:26:20.219 real 0m2.534s 00:26:20.219 user 0m6.490s 00:26:20.219 sys 0m0.576s 00:26:20.219 12:25:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:20.219 12:25:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:26:20.219 ************************************ 00:26:20.219 END TEST nvmf_identify 00:26:20.219 ************************************ 00:26:20.219 12:25:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:26:20.219 12:25:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:20.219 12:25:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:20.219 12:25:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.480 ************************************ 00:26:20.480 START TEST nvmf_perf 00:26:20.480 ************************************ 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:26:20.480 * Looking for test storage... 00:26:20.480 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:20.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:20.480 --rc genhtml_branch_coverage=1 00:26:20.480 --rc genhtml_function_coverage=1 00:26:20.480 --rc genhtml_legend=1 00:26:20.480 --rc geninfo_all_blocks=1 00:26:20.480 --rc geninfo_unexecuted_blocks=1 00:26:20.480 00:26:20.480 ' 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:20.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:20.480 --rc genhtml_branch_coverage=1 00:26:20.480 --rc genhtml_function_coverage=1 00:26:20.480 --rc genhtml_legend=1 00:26:20.480 --rc geninfo_all_blocks=1 00:26:20.480 --rc geninfo_unexecuted_blocks=1 00:26:20.480 00:26:20.480 ' 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:20.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:20.480 --rc genhtml_branch_coverage=1 00:26:20.480 --rc genhtml_function_coverage=1 00:26:20.480 --rc genhtml_legend=1 00:26:20.480 --rc geninfo_all_blocks=1 00:26:20.480 --rc geninfo_unexecuted_blocks=1 00:26:20.480 00:26:20.480 ' 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:20.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:20.480 --rc genhtml_branch_coverage=1 00:26:20.480 --rc genhtml_function_coverage=1 00:26:20.480 --rc genhtml_legend=1 00:26:20.480 --rc geninfo_all_blocks=1 00:26:20.480 --rc geninfo_unexecuted_blocks=1 00:26:20.480 00:26:20.480 ' 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:20.480 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:20.480 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:20.481 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:26:20.481 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:20.481 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:20.481 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:20.481 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:20.481 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:20.481 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:20.481 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:26:20.481 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:20.481 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:20.481 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:26:20.481 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:20.481 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:20.481 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:20.481 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:20.481 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:20.481 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:20.481 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:20.481 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:20.481 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:20.481 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:20.481 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:20.481 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:20.481 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:20.481 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:20.481 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:20.481 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:20.481 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:20.481 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:20.481 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:20.481 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:20.481 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:20.481 Cannot find device "nvmf_init_br" 00:26:20.481 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:26:20.481 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:20.481 Cannot find device "nvmf_init_br2" 00:26:20.481 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:26:20.481 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:20.481 Cannot find device "nvmf_tgt_br" 00:26:20.481 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:26:20.481 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:20.481 Cannot find device "nvmf_tgt_br2" 00:26:20.481 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:26:20.481 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:20.481 Cannot find device "nvmf_init_br" 00:26:20.481 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:26:20.481 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:20.481 Cannot find device "nvmf_init_br2" 00:26:20.481 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:26:20.481 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:20.481 Cannot find device "nvmf_tgt_br" 00:26:20.481 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:26:20.481 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:20.754 Cannot find device "nvmf_tgt_br2" 00:26:20.754 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:26:20.754 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:20.754 Cannot find device "nvmf_br" 00:26:20.754 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:26:20.754 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:20.754 Cannot find device "nvmf_init_if" 00:26:20.754 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:26:20.754 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:20.754 Cannot find device "nvmf_init_if2" 00:26:20.754 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:26:20.754 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:20.754 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:20.754 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:26:20.754 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:20.754 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:20.754 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:26:20.754 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:20.754 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:20.754 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:20.754 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:20.754 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:20.754 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:20.754 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:20.754 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:20.754 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:20.754 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:20.754 12:25:16 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:20.754 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:20.754 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:20.754 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:20.754 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:20.754 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:20.754 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:20.754 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:20.754 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:20.754 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:20.754 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:20.754 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:20.754 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:20.754 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:20.754 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:20.754 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:20.754 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:20.754 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:20.754 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:20.754 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:20.754 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:20.754 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:20.754 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:20.754 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:20.754 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.081 ms 00:26:20.754 00:26:20.754 --- 10.0.0.3 ping statistics --- 00:26:20.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:20.754 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:26:20.754 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:20.754 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:26:20.754 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.070 ms 00:26:20.754 00:26:20.754 --- 10.0.0.4 ping statistics --- 00:26:20.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:20.755 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:26:20.755 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:20.755 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:20.755 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.016 ms 00:26:20.755 00:26:20.755 --- 10.0.0.1 ping statistics --- 00:26:20.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:20.755 rtt min/avg/max/mdev = 0.016/0.016/0.016/0.000 ms 00:26:20.755 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:20.755 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:20.755 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:26:20.755 00:26:20.755 --- 10.0.0.2 ping statistics --- 00:26:20.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:20.755 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:26:20.755 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:20.755 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:26:20.755 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:20.755 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:20.755 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:20.755 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:20.755 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:20.755 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:20.755 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:21.013 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:26:21.013 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:21.013 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:21.013 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:21.013 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=86567 00:26:21.013 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 86567 00:26:21.013 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 86567 ']' 00:26:21.013 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:21.013 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:21.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:21.013 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:21.013 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
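[Editor's note: sketch, not captured output. The nvmf_veth_init trace above builds the test fabric: one veth pair for the initiator, one for the target (moved into the nvmf_tgt_ns_spdk namespace), both attached to a bridge so 10.0.0.1/10.0.0.2 can reach 10.0.0.3/10.0.0.4. Condensed from the nvmf/common.sh commands logged above, first interface pair only; the *_if2 pair (10.0.0.2/10.0.0.4) is identical:]

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                      # join both halves
    ip link set nvmf_tgt_br master nvmf_br
    # rules are tagged SPDK_NVMF so teardown can strip exactly these entries
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'

[The four single-packet pings above verify both directions across the bridge before nvmf_tgt (nvmfpid=86567) is launched inside the namespace just below.]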
00:26:21.013 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:21.013 12:25:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:21.013 [2024-11-25 12:25:16.668290] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:26:21.013 [2024-11-25 12:25:16.668337] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:21.013 [2024-11-25 12:25:16.803150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:21.013 [2024-11-25 12:25:16.833988] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:21.013 [2024-11-25 12:25:16.834023] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:21.013 [2024-11-25 12:25:16.834029] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:21.013 [2024-11-25 12:25:16.834033] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:21.013 [2024-11-25 12:25:16.834036] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:21.013 [2024-11-25 12:25:16.834638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:21.013 [2024-11-25 12:25:16.834686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:21.013 [2024-11-25 12:25:16.834736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:21.013 [2024-11-25 12:25:16.834740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:21.949 12:25:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:21.949 12:25:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:26:21.949 12:25:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:21.949 12:25:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:21.949 12:25:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:21.949 12:25:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:21.949 12:25:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:26:21.949 12:25:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:26:22.208 12:25:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:26:22.208 12:25:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:26:22.467 12:25:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:26:22.467 12:25:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:22.724 12:25:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:26:22.724 12:25:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:26:22.724 12:25:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 
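[Editor's note: sketch, not captured output. With the target up, perf.sh provisions it over JSON-RPC; the sequence traced in the lines that follow condenses to the calls below (rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py; Malloc0 is the ramdisk from the bdev_malloc_create 64 512 call above, Nvme0n1 the local controller at 0000:00:10.0):]

    rpc.py nvmf_create_transport -t tcp -o                                # TCP transport
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0       # NSID 1, 512 B blocks
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1       # NSID 2, 4096 B blocks
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

[The first spdk_nvme_perf run below is a local PCIe baseline (-r 'trtype:PCIe traddr:0000:00:10.0'); the remaining runs connect over the fabric with -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' while sweeping queue depth and IO size, which is why the 36964-byte run skips both namespaces.]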
00:26:22.724 12:25:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:26:22.724 12:25:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:22.983 [2024-11-25 12:25:18.716398] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:22.983 12:25:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:23.241 12:25:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:23.241 12:25:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:23.497 12:25:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:23.497 12:25:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:26:23.497 12:25:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:23.754 [2024-11-25 12:25:19.537343] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:23.754 12:25:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:26:24.011 12:25:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:26:24.011 12:25:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:26:24.011 12:25:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:26:24.011 12:25:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:26:25.382 Initializing NVMe Controllers 00:26:25.382 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:26:25.382 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:26:25.382 Initialization complete. Launching workers. 00:26:25.382 ======================================================== 00:26:25.382 Latency(us) 00:26:25.382 Device Information : IOPS MiB/s Average min max 00:26:25.382 PCIE (0000:00:10.0) NSID 1 from core 0: 34368.00 134.25 930.65 251.52 5385.64 00:26:25.382 ======================================================== 00:26:25.382 Total : 34368.00 134.25 930.65 251.52 5385.64 00:26:25.382 00:26:25.382 12:25:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:26:26.314 Initializing NVMe Controllers 00:26:26.314 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:26:26.314 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:26.314 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:26.314 Initialization complete. Launching workers. 
00:26:26.314 ======================================================== 00:26:26.314 Latency(us) 00:26:26.314 Device Information : IOPS MiB/s Average min max 00:26:26.314 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5201.28 20.32 192.03 75.95 6112.97 00:26:26.314 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.62 0.49 8087.58 5986.52 12012.88 00:26:26.314 ======================================================== 00:26:26.314 Total : 5325.90 20.80 376.79 75.95 12012.88 00:26:26.314 00:26:26.573 12:25:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:26:27.947 Initializing NVMe Controllers 00:26:27.947 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:26:27.947 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:27.947 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:27.947 Initialization complete. Launching workers. 00:26:27.947 ======================================================== 00:26:27.947 Latency(us) 00:26:27.947 Device Information : IOPS MiB/s Average min max 00:26:27.947 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10002.44 39.07 3199.20 523.39 8381.08 00:26:27.947 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2719.43 10.62 11886.30 7579.43 20312.65 00:26:27.947 ======================================================== 00:26:27.947 Total : 12721.87 49.69 5056.16 523.39 20312.65 00:26:27.947 00:26:27.947 12:25:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:26:27.947 12:25:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:26:30.475 Initializing NVMe Controllers 00:26:30.475 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:26:30.475 Controller IO queue size 128, less than required. 00:26:30.475 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:30.475 Controller IO queue size 128, less than required. 00:26:30.475 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:30.475 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:30.475 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:30.475 Initialization complete. Launching workers. 
00:26:30.475 ======================================================== 00:26:30.475 Latency(us) 00:26:30.475 Device Information : IOPS MiB/s Average min max 00:26:30.475 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2359.82 589.95 54903.72 36316.71 95247.14 00:26:30.475 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 635.05 158.76 212866.61 65036.08 322992.20 00:26:30.475 ======================================================== 00:26:30.475 Total : 2994.86 748.72 88399.02 36316.71 322992.20 00:26:30.475 00:26:30.475 12:25:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:26:30.475 Initializing NVMe Controllers 00:26:30.475 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:26:30.475 Controller IO queue size 128, less than required. 00:26:30.475 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:30.475 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:26:30.475 Controller IO queue size 128, less than required. 00:26:30.475 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:30.475 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:26:30.475 WARNING: Some requested NVMe devices were skipped 00:26:30.476 No valid NVMe controllers or AIO or URING devices found 00:26:30.733 12:25:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:26:33.362 Initializing NVMe Controllers 00:26:33.362 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:26:33.362 Controller IO queue size 128, less than required. 00:26:33.362 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:33.362 Controller IO queue size 128, less than required. 00:26:33.362 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:33.362 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:33.362 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:33.362 Initialization complete. Launching workers. 
00:26:33.362 00:26:33.362 ==================== 00:26:33.362 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:26:33.362 TCP transport: 00:26:33.362 polls: 14677 00:26:33.362 idle_polls: 7741 00:26:33.362 sock_completions: 6936 00:26:33.362 nvme_completions: 8497 00:26:33.362 submitted_requests: 12792 00:26:33.362 queued_requests: 1 00:26:33.362 00:26:33.362 ==================== 00:26:33.362 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:26:33.362 TCP transport: 00:26:33.362 polls: 14753 00:26:33.362 idle_polls: 9533 00:26:33.362 sock_completions: 5220 00:26:33.362 nvme_completions: 9773 00:26:33.362 submitted_requests: 14658 00:26:33.362 queued_requests: 1 00:26:33.362 ======================================================== 00:26:33.362 Latency(us) 00:26:33.362 Device Information : IOPS MiB/s Average min max 00:26:33.362 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2120.16 530.04 61529.48 40209.42 104201.64 00:26:33.362 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2438.58 609.64 52732.89 31731.30 86956.73 00:26:33.362 ======================================================== 00:26:33.362 Total : 4558.74 1139.68 56823.97 31731.30 104201.64 00:26:33.362 00:26:33.362 12:25:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:26:33.362 12:25:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:33.362 12:25:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:26:33.362 12:25:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:26:33.362 12:25:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:26:33.363 12:25:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:33.363 12:25:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:26:33.363 12:25:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:33.363 12:25:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:26:33.363 12:25:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:33.363 12:25:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:33.363 rmmod nvme_tcp 00:26:33.363 rmmod nvme_fabrics 00:26:33.363 rmmod nvme_keyring 00:26:33.363 12:25:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:33.363 12:25:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:26:33.363 12:25:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:26:33.363 12:25:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 86567 ']' 00:26:33.363 12:25:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 86567 00:26:33.363 12:25:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 86567 ']' 00:26:33.363 12:25:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 86567 00:26:33.363 12:25:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:26:33.363 12:25:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:33.363 12:25:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86567 00:26:33.363 killing process with pid 86567 00:26:33.363 12:25:29 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:33.363 12:25:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:33.363 12:25:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86567' 00:26:33.363 12:25:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 86567 00:26:33.363 12:25:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 86567 00:26:38.627 12:25:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:38.627 12:25:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:38.627 12:25:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:38.627 12:25:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:26:38.627 12:25:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:26:38.627 12:25:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:38.627 12:25:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:26:38.627 12:25:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:38.627 12:25:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:38.627 12:25:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:38.627 12:25:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:38.627 12:25:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:38.627 12:25:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:38.627 12:25:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:38.627 12:25:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:38.627 12:25:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:38.627 12:25:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:38.627 12:25:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:38.627 12:25:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:38.627 12:25:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:38.627 12:25:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:38.627 12:25:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:38.627 12:25:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:38.627 12:25:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:38.627 12:25:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:38.627 12:25:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:38.627 12:25:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:26:38.627 00:26:38.627 real 0m18.158s 00:26:38.627 user 1m0.499s 00:26:38.627 sys 0m2.928s 00:26:38.627 12:25:34 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:38.627 ************************************ 00:26:38.627 END TEST nvmf_perf 00:26:38.627 12:25:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:26:38.627 ************************************ 00:26:38.627 12:25:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:26:38.627 12:25:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:38.627 12:25:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:38.627 12:25:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.627 ************************************ 00:26:38.627 START TEST nvmf_fio_host 00:26:38.627 ************************************ 00:26:38.627 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:26:38.627 * Looking for test storage... 00:26:38.627 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:38.627 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:38.627 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:26:38.627 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:38.627 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:38.627 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:38.627 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:38.627 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:38.627 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:26:38.627 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:26:38.627 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:26:38.627 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:26:38.627 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:26:38.627 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:26:38.627 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:26:38.627 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:38.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.628 --rc genhtml_branch_coverage=1 00:26:38.628 --rc genhtml_function_coverage=1 00:26:38.628 --rc genhtml_legend=1 00:26:38.628 --rc geninfo_all_blocks=1 00:26:38.628 --rc geninfo_unexecuted_blocks=1 00:26:38.628 00:26:38.628 ' 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:38.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.628 --rc genhtml_branch_coverage=1 00:26:38.628 --rc genhtml_function_coverage=1 00:26:38.628 --rc genhtml_legend=1 00:26:38.628 --rc geninfo_all_blocks=1 00:26:38.628 --rc geninfo_unexecuted_blocks=1 00:26:38.628 00:26:38.628 ' 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:38.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.628 --rc genhtml_branch_coverage=1 00:26:38.628 --rc genhtml_function_coverage=1 00:26:38.628 --rc genhtml_legend=1 00:26:38.628 --rc geninfo_all_blocks=1 00:26:38.628 --rc geninfo_unexecuted_blocks=1 00:26:38.628 00:26:38.628 ' 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:38.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:38.628 --rc genhtml_branch_coverage=1 00:26:38.628 --rc genhtml_function_coverage=1 00:26:38.628 --rc genhtml_legend=1 00:26:38.628 --rc geninfo_all_blocks=1 00:26:38.628 --rc geninfo_unexecuted_blocks=1 00:26:38.628 00:26:38.628 ' 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:38.628 12:25:34 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.628 12:25:34 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:38.628 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:38.629 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:38.629 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:38.629 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:38.629 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:38.629 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:38.629 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:38.629 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:38.629 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:38.629 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:38.629 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:26:38.629 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:38.629 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:38.629 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:38.629 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:38.629 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:38.629 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
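[editor's note] The `[: : integer expression expected` complaint above is real but harmless here: nvmf/common.sh line 33 runs a numeric test on a flag that expands empty in this configuration, and test(1) cannot compare '' as an integer, so the branch simply falls through. A minimal sketch of the failure and a defensive rewrite (the flag name is hypothetical; the log only shows the empty expansion):

    flag=""                        # hypothetical name; the log shows only '[' '' -eq 1 ']'
    [ "$flag" -eq 1 ]              # -> [: : integer expression expected
    [ "${flag:-0}" -eq 1 ]         # hedged fix: default the empty value to 0 first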
00:26:38.629 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:38.629 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:38.629 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:38.629 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:26:38.629 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:38.629 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:38.629 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:38.629 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:38.629 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:38.629 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:38.629 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:38.629 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:38.629 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:38.629 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:38.629 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:38.629 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:38.629 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:38.629 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:38.629 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:38.629 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:38.629 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:38.629 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:38.629 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:38.629 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:38.629 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:38.629 Cannot find device "nvmf_init_br" 00:26:38.629 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:26:38.629 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:38.888 Cannot find device "nvmf_init_br2" 00:26:38.888 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:26:38.888 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:26:38.888 Cannot find device "nvmf_tgt_br" 00:26:38.888 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:26:38.888 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:26:38.888 Cannot find device "nvmf_tgt_br2" 00:26:38.888 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:26:38.888 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:38.888 Cannot find device "nvmf_init_br" 00:26:38.888 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:26:38.888 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:38.888 Cannot find device "nvmf_init_br2" 00:26:38.888 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:26:38.888 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:38.888 Cannot find device "nvmf_tgt_br" 00:26:38.888 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:26:38.888 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:38.888 Cannot find device "nvmf_tgt_br2" 00:26:38.888 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:26:38.888 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:38.888 Cannot find device "nvmf_br" 00:26:38.888 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:26:38.888 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:38.888 Cannot find device "nvmf_init_if" 00:26:38.888 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:26:38.888 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:38.888 Cannot find device "nvmf_init_if2" 00:26:38.888 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:26:38.888 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:38.888 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:38.888 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:26:38.888 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:38.888 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:38.888 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:26:38.888 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:38.888 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:38.888 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:38.888 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:38.889 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:38.889 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:38.889 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:38.889 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:26:38.889 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:38.889 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:38.889 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:38.889 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:38.889 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:38.889 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:38.889 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:38.889 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:38.889 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:38.889 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:38.889 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:38.889 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:38.889 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:38.889 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:38.889 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:38.889 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:38.889 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:38.889 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:38.889 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:38.889 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:38.889 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:38.889 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:38.889 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:38.889 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:26:38.889 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:38.889 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:26:38.889 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:26:38.889 00:26:38.889 --- 10.0.0.3 ping statistics --- 00:26:38.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:38.889 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:26:38.889 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:38.889 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:38.889 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.024 ms 00:26:38.889 00:26:38.889 --- 10.0.0.4 ping statistics --- 00:26:38.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:38.889 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:26:38.889 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:38.889 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:38.889 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.015 ms 00:26:38.889 00:26:38.889 --- 10.0.0.1 ping statistics --- 00:26:38.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:38.889 rtt min/avg/max/mdev = 0.015/0.015/0.015/0.000 ms 00:26:38.889 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:38.889 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:38.889 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.027 ms 00:26:38.889 00:26:38.889 --- 10.0.0.2 ping statistics --- 00:26:38.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:38.889 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:26:38.889 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:38.889 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:26:38.889 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:38.889 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:38.889 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:38.889 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:38.889 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:38.889 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:38.889 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:39.148 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:26:39.148 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:26:39.149 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:39.149 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.149 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=87097 00:26:39.149 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:39.149 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 87097 00:26:39.149 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 87097 ']' 00:26:39.149 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
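[editor's note] The nvmf_veth_init trace above assembles a disposable test network: veth pairs for the initiator and target interfaces, the target ends moved into the nvmf_tgt_ns_spdk namespace, every bridge-side peer enslaved to nvmf_br, and four pings to prove both directions work. A condensed sketch of the same topology, reduced to the first initiator/target pair and using the names and addresses from the log:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # isolate the target end
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_tgt_br up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                      # splice the two halves together
    ip link set nvmf_tgt_br master nvmf_br
    ping -c 1 10.0.0.3                                           # initiator -> target

The ipts wrapper expanded at nvmf/common.sh@790 tags each ACCEPT rule with an SPDK_NVMF: comment so teardown can sweep only those rules; judging by the three commands traced during cleanup later in this log, the companion iptr is plausibly the pipeline iptables-save | grep -v SPDK_NVMF | iptables-restore.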
00:26:39.149 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:39.149 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:39.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:39.149 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:39.149 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:39.149 12:25:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.149 [2024-11-25 12:25:34.817617] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:26:39.149 [2024-11-25 12:25:34.817674] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:39.149 [2024-11-25 12:25:34.952831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:39.149 [2024-11-25 12:25:34.985406] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:39.149 [2024-11-25 12:25:34.985446] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:39.149 [2024-11-25 12:25:34.985451] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:39.149 [2024-11-25 12:25:34.985455] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:39.149 [2024-11-25 12:25:34.985459] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
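[editor's note] waitforlisten, entered just above, parks the test until the freshly launched target answers on /var/tmp/spdk.sock before any provisioning RPC is sent. A minimal sketch of that launch-and-poll pattern, assuming rpc_get_methods as the readiness probe (the log does not show which call waitforlisten actually issues):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # poll the UNIX-domain RPC socket until the app is ready to serve requests
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
          rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done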
00:26:39.149 [2024-11-25 12:25:34.986103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:39.149 [2024-11-25 12:25:34.986179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:39.149 [2024-11-25 12:25:34.986260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:39.149 [2024-11-25 12:25:34.986239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:40.083 12:25:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:40.083 12:25:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:26:40.083 12:25:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:40.083 [2024-11-25 12:25:35.788206] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:40.083 12:25:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:26:40.083 12:25:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:40.083 12:25:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.083 12:25:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:26:40.341 Malloc1 00:26:40.341 12:25:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:40.633 12:25:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:40.633 12:25:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:40.915 [2024-11-25 12:25:36.648740] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:40.915 12:25:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:26:41.173 12:25:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:26:41.173 12:25:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:26:41.173 12:25:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:26:41.173 12:25:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:26:41.173 12:25:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:41.173 12:25:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:26:41.173 12:25:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:26:41.173 12:25:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # shift 00:26:41.173 12:25:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:26:41.173 12:25:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:41.173 12:25:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:26:41.173 12:25:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:26:41.173 12:25:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:41.173 12:25:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:26:41.173 12:25:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:26:41.173 12:25:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:41.173 12:25:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:26:41.173 12:25:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:26:41.173 12:25:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:41.173 12:25:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:26:41.173 12:25:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:26:41.173 12:25:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:26:41.173 12:25:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:26:41.173 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:26:41.173 fio-3.35 00:26:41.173 Starting 1 thread 00:26:43.703 00:26:43.703 test: (groupid=0, jobs=1): err= 0: pid=87223: Mon Nov 25 12:25:39 2024 00:26:43.703 read: IOPS=10.0k, BW=39.2MiB/s (41.1MB/s)(78.6MiB/2006msec) 00:26:43.703 slat (nsec): min=1899, max=250281, avg=2019.52, stdev=2171.30 00:26:43.703 clat (usec): min=2402, max=15539, avg=6662.15, stdev=735.42 00:26:43.703 lat (usec): min=2430, max=15541, avg=6664.17, stdev=735.29 00:26:43.703 clat percentiles (usec): 00:26:43.703 | 1.00th=[ 5669], 5.00th=[ 5997], 10.00th=[ 6128], 20.00th=[ 6259], 00:26:43.703 | 30.00th=[ 6390], 40.00th=[ 6521], 50.00th=[ 6587], 60.00th=[ 6718], 00:26:43.703 | 70.00th=[ 6783], 80.00th=[ 6915], 90.00th=[ 7111], 95.00th=[ 7308], 00:26:43.703 | 99.00th=[ 9241], 99.50th=[12256], 99.90th=[15270], 99.95th=[15401], 00:26:43.703 | 99.99th=[15533] 00:26:43.703 bw ( KiB/s): min=39056, max=40528, per=99.93%, avg=40078.00, stdev=687.02, samples=4 00:26:43.703 iops : min= 9764, max=10132, avg=10019.50, stdev=171.75, samples=4 00:26:43.703 write: IOPS=10.0k, BW=39.2MiB/s (41.1MB/s)(78.6MiB/2006msec); 0 zone resets 00:26:43.703 slat (nsec): min=1936, max=175825, avg=2082.09, stdev=1393.48 00:26:43.703 clat (usec): min=1814, max=14988, avg=6033.45, stdev=692.62 00:26:43.703 lat (usec): min=1824, max=14990, avg=6035.53, stdev=692.52 00:26:43.703 clat percentiles (usec): 00:26:43.703 | 1.00th=[ 5145], 5.00th=[ 5407], 10.00th=[ 5538], 20.00th=[ 5735], 
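[editor's note] The five RPCs traced above are the entire bring-up for this host test: a TCP transport, a 64 MiB malloc bdev, subsystem cnode1 with that bdev as its namespace, and data plus discovery listeners on 10.0.0.3:4420. fio then reaches the namespace through the SPDK NVMe plugin, so the transport address is encoded in --filename rather than a block-device path and no kernel-side nvme connect is needed. Condensed from the commands in the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1                # 64 MiB bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # the plugin turns fio I/O into userspace NVMe/TCP traffic
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /usr/src/fio/fio \
        /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        --filename='trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096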
00:26:43.703 | 30.00th=[ 5800], 40.00th=[ 5932], 50.00th=[ 5997], 60.00th=[ 6063], 00:26:43.703 | 70.00th=[ 6128], 80.00th=[ 6259], 90.00th=[ 6456], 95.00th=[ 6587], 00:26:43.703 | 99.00th=[ 8356], 99.50th=[11469], 99.90th=[14615], 99.95th=[14746], 00:26:43.703 | 99.99th=[15008] 00:26:43.703 bw ( KiB/s): min=39512, max=40712, per=100.00%, avg=40150.00, stdev=592.23, samples=4 00:26:43.703 iops : min= 9878, max=10178, avg=10037.50, stdev=148.06, samples=4 00:26:43.703 lat (msec) : 2=0.01%, 4=0.15%, 10=99.04%, 20=0.80% 00:26:43.703 cpu : usr=78.05%, sys=17.56%, ctx=46, majf=0, minf=7 00:26:43.703 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:26:43.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:43.703 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:43.703 issued rwts: total=20113,20128,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:43.703 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:43.703 00:26:43.703 Run status group 0 (all jobs): 00:26:43.703 READ: bw=39.2MiB/s (41.1MB/s), 39.2MiB/s-39.2MiB/s (41.1MB/s-41.1MB/s), io=78.6MiB (82.4MB), run=2006-2006msec 00:26:43.703 WRITE: bw=39.2MiB/s (41.1MB/s), 39.2MiB/s-39.2MiB/s (41.1MB/s-41.1MB/s), io=78.6MiB (82.4MB), run=2006-2006msec 00:26:43.704 12:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:26:43.704 12:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:26:43.704 12:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:26:43.704 12:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:43.704 12:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:26:43.704 12:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:26:43.704 12:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:26:43.704 12:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:26:43.704 12:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:43.704 12:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:26:43.704 12:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:26:43.704 12:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:43.704 12:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:26:43.704 12:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:26:43.704 12:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:43.704 12:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:26:43.704 12:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:26:43.704 12:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:43.704 12:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:26:43.704 12:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:26:43.704 12:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:26:43.704 12:25:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:26:43.704 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:26:43.704 fio-3.35 00:26:43.704 Starting 1 thread 00:26:46.233 00:26:46.233 test: (groupid=0, jobs=1): err= 0: pid=87271: Mon Nov 25 12:25:41 2024 00:26:46.233 read: IOPS=10.6k, BW=165MiB/s (173MB/s)(331MiB/2006msec) 00:26:46.233 slat (usec): min=2, max=114, avg= 3.22, stdev= 1.51 00:26:46.233 clat (usec): min=2011, max=13529, avg=7301.58, stdev=1720.65 00:26:46.233 lat (usec): min=2014, max=13532, avg=7304.80, stdev=1720.76 00:26:46.233 clat percentiles (usec): 00:26:46.233 | 1.00th=[ 3589], 5.00th=[ 4621], 10.00th=[ 5145], 20.00th=[ 5735], 00:26:46.233 | 30.00th=[ 6259], 40.00th=[ 6783], 50.00th=[ 7308], 60.00th=[ 7701], 00:26:46.233 | 70.00th=[ 8225], 80.00th=[ 8979], 90.00th=[ 9634], 95.00th=[ 9896], 00:26:46.233 | 99.00th=[11338], 99.50th=[11994], 99.90th=[13042], 99.95th=[13173], 00:26:46.233 | 99.99th=[13566] 00:26:46.233 bw ( KiB/s): min=78368, max=92608, per=49.91%, avg=84341.75, stdev=6027.50, samples=4 00:26:46.233 iops : min= 4898, max= 5788, avg=5271.25, stdev=376.72, samples=4 00:26:46.233 write: IOPS=6231, BW=97.4MiB/s (102MB/s)(172MiB/1768msec); 0 zone resets 00:26:46.234 slat (usec): min=28, max=333, avg=36.48, stdev= 6.54 00:26:46.234 clat (usec): min=2642, max=14515, avg=8439.07, stdev=1265.98 00:26:46.234 lat (usec): min=2670, max=14552, avg=8475.55, stdev=1266.90 00:26:46.234 clat percentiles (usec): 00:26:46.234 | 1.00th=[ 5538], 5.00th=[ 6521], 10.00th=[ 6915], 20.00th=[ 7373], 00:26:46.234 | 30.00th=[ 7767], 40.00th=[ 8094], 50.00th=[ 8455], 60.00th=[ 8717], 00:26:46.234 | 70.00th=[ 9110], 80.00th=[ 9503], 90.00th=[10028], 95.00th=[10552], 00:26:46.234 | 99.00th=[11600], 99.50th=[11994], 99.90th=[13173], 99.95th=[13566], 00:26:46.234 | 99.99th=[14484] 00:26:46.234 bw ( KiB/s): min=81920, max=96256, per=87.88%, avg=87628.25, stdev=6130.55, samples=4 00:26:46.234 iops : min= 5120, max= 6016, avg=5476.75, stdev=383.16, samples=4 00:26:46.234 lat (msec) : 4=1.40%, 10=92.23%, 20=6.37% 00:26:46.234 cpu : usr=79.25%, sys=13.47%, ctx=9, majf=0, minf=14 00:26:46.234 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:26:46.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:46.234 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:46.234 issued rwts: total=21186,11018,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:46.234 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:46.234 00:26:46.234 Run status group 0 (all jobs): 00:26:46.234 READ: bw=165MiB/s (173MB/s), 165MiB/s-165MiB/s (173MB/s-173MB/s), io=331MiB (347MB), run=2006-2006msec 00:26:46.234 WRITE: 
bw=97.4MiB/s (102MB/s), 97.4MiB/s-97.4MiB/s (102MB/s-102MB/s), io=172MiB (181MB), run=1768-1768msec 00:26:46.234 12:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:46.234 12:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:26:46.234 12:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:26:46.234 12:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:26:46.234 12:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:26:46.234 12:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:46.234 12:25:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:26:46.234 12:25:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:46.234 12:25:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:26:46.234 12:25:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:46.234 12:25:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:46.234 rmmod nvme_tcp 00:26:46.234 rmmod nvme_fabrics 00:26:46.234 rmmod nvme_keyring 00:26:46.234 12:25:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:46.234 12:25:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:26:46.234 12:25:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:26:46.234 12:25:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 87097 ']' 00:26:46.234 12:25:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 87097 00:26:46.234 12:25:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 87097 ']' 00:26:46.234 12:25:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 87097 00:26:46.234 12:25:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:26:46.234 12:25:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:46.234 12:25:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87097 00:26:46.234 killing process with pid 87097 00:26:46.234 12:25:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:46.234 12:25:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:46.234 12:25:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87097' 00:26:46.234 12:25:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 87097 00:26:46.234 12:25:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 87097 00:26:46.492 12:25:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:46.492 12:25:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:46.492 12:25:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:46.492 12:25:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:26:46.492 12:25:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:26:46.492 12:25:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:46.492 12:25:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:26:46.492 12:25:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:46.492 12:25:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:46.492 12:25:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:46.492 12:25:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:46.492 12:25:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:46.492 12:25:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:46.492 12:25:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:46.493 12:25:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:46.493 12:25:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:46.493 12:25:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:46.493 12:25:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:46.493 12:25:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:46.493 12:25:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:46.751 12:25:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:46.751 12:25:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:46.751 12:25:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:46.751 12:25:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:46.751 12:25:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:46.751 12:25:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:46.751 12:25:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:26:46.751 00:26:46.751 real 0m8.116s 00:26:46.751 user 0m33.284s 00:26:46.751 sys 0m1.686s 00:26:46.751 12:25:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:46.751 12:25:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.751 ************************************ 00:26:46.751 END TEST nvmf_fio_host 00:26:46.751 ************************************ 00:26:46.751 12:25:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:26:46.751 12:25:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:46.751 12:25:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:46.751 12:25:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.751 ************************************ 00:26:46.751 START TEST nvmf_failover 00:26:46.751 ************************************ 00:26:46.751 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:26:46.751 * Looking for test storage... 00:26:46.751 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:46.752 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:46.752 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:26:46.752 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:46.752 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:46.752 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:46.752 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:46.752 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:46.752 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:26:46.752 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:26:46.752 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:26:46.752 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:26:46.752 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:26:46.752 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:26:46.752 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:26:46.752 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:46.752 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:26:46.752 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:26:46.752 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:46.752 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:46.752 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:26:46.752 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:26:46.752 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:46.752 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:26:46.752 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:26:46.752 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:26:46.752 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:26:46.752 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:46.752 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:26:46.752 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:26:46.752 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:46.752 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:46.752 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:26:46.752 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:46.752 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:46.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:46.752 --rc genhtml_branch_coverage=1 00:26:46.752 --rc genhtml_function_coverage=1 00:26:46.752 --rc genhtml_legend=1 00:26:46.752 --rc geninfo_all_blocks=1 00:26:46.752 --rc geninfo_unexecuted_blocks=1 00:26:46.752 00:26:46.752 ' 00:26:46.752 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:46.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:46.752 --rc genhtml_branch_coverage=1 00:26:46.752 --rc genhtml_function_coverage=1 00:26:46.752 --rc genhtml_legend=1 00:26:46.752 --rc geninfo_all_blocks=1 00:26:46.752 --rc geninfo_unexecuted_blocks=1 00:26:46.752 00:26:46.752 ' 00:26:46.752 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:46.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:46.752 --rc genhtml_branch_coverage=1 00:26:46.752 --rc genhtml_function_coverage=1 00:26:46.752 --rc genhtml_legend=1 00:26:46.752 --rc geninfo_all_blocks=1 00:26:46.752 --rc geninfo_unexecuted_blocks=1 00:26:46.752 00:26:46.752 ' 00:26:46.752 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:46.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:46.752 --rc genhtml_branch_coverage=1 00:26:46.752 --rc genhtml_function_coverage=1 00:26:46.752 --rc genhtml_legend=1 00:26:46.752 --rc geninfo_all_blocks=1 00:26:46.752 --rc geninfo_unexecuted_blocks=1 00:26:46.752 00:26:46.752 ' 00:26:46.752 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:46.752 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:26:46.752 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:46.752 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:26:46.752 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:46.752 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:46.752 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:46.752 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:46.752 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:46.752 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:46.752 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:47.011 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:47.011 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:26:47.011 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:26:47.011 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.012 
12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:47.012 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
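[editor's note] Before the failover test proper starts, the lt/cmp_versions/decimal helpers traced at scripts/common.sh@333-368 above decide which lcov flags to export: the two version strings are split on ., - and : (IFS=.-:), each component pair is compared numerically, and missing components count as zero, which is how lcov 1.15 sorts below 2. A condensed reimplementation of that idea, assuming plain numeric fields; this is not the harness's exact code:

    # returns success when $1 sorts strictly before $2
    version_lt() {
        local IFS=.-: a b i
        read -ra a <<< "$1"; read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov is pre-2.0"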
00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:47.012 Cannot find device "nvmf_init_br" 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:47.012 Cannot find device "nvmf_init_br2" 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:26:47.012 Cannot find device "nvmf_tgt_br" 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:47.012 Cannot find device "nvmf_tgt_br2" 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:47.012 Cannot find device "nvmf_init_br" 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:47.012 Cannot find device "nvmf_init_br2" 00:26:47.012 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:26:47.013 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:47.013 Cannot find device "nvmf_tgt_br" 00:26:47.013 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:26:47.013 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:47.013 Cannot find device "nvmf_tgt_br2" 00:26:47.013 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:26:47.013 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:47.013 Cannot find device "nvmf_br" 00:26:47.013 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:26:47.013 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:47.013 Cannot find device "nvmf_init_if" 00:26:47.013 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:26:47.013 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:47.013 Cannot find device "nvmf_init_if2" 00:26:47.013 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:26:47.013 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:47.013 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:47.013 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:26:47.013 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:47.013 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:47.013 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:26:47.013 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:47.013 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:47.013 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:47.013 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:47.013 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:47.013 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:47.013 
12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:47.013 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:47.013 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:47.013 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:47.013 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:47.013 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:47.013 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:47.013 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:47.013 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:47.013 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:47.013 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:47.013 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:47.013 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:47.013 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:47.013 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:47.013 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:47.013 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:47.013 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:47.013 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:47.273 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:47.273 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:47.273 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:47.273 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:47.273 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:47.273 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:47.273 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:26:47.273 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:47.273 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:47.273 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:26:47.273 00:26:47.273 --- 10.0.0.3 ping statistics --- 00:26:47.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:47.273 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:26:47.273 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:47.273 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:47.273 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:26:47.273 00:26:47.273 --- 10.0.0.4 ping statistics --- 00:26:47.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:47.273 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:26:47.273 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:47.273 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:47.273 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:26:47.273 00:26:47.273 --- 10.0.0.1 ping statistics --- 00:26:47.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:47.273 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:26:47.273 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:47.273 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:47.273 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.035 ms 00:26:47.273 00:26:47.273 --- 10.0.0.2 ping statistics --- 00:26:47.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:47.273 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:26:47.273 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:47.273 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:26:47.273 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:47.273 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:47.273 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:47.273 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:47.273 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:47.273 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:47.273 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:47.273 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:26:47.273 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:47.273 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:47.273 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:47.273 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:47.273 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=87539 00:26:47.273 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 87539 00:26:47.273 Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:47.273 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 87539 ']' 00:26:47.273 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:47.273 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:47.273 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:47.273 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:47.273 12:25:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:47.273 [2024-11-25 12:25:42.974430] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:26:47.273 [2024-11-25 12:25:42.974489] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:47.273 [2024-11-25 12:25:43.112000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:47.273 [2024-11-25 12:25:43.143211] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:47.273 [2024-11-25 12:25:43.143369] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:47.273 [2024-11-25 12:25:43.143420] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:47.273 [2024-11-25 12:25:43.143464] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:47.273 [2024-11-25 12:25:43.143479] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
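For reference, the nvmf_veth_init sequence traced above reduces to the following sketch; device names, addresses, and firewall rules are exactly the ones in the log, with error handling and the ipts comment wrapper omitted:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator path 1
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2    # initiator path 2
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target path 1
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # target path 2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # target ends live in the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                                # the bridge ties the four peer ends together
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3                                             # reachability checks, as run above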
00:26:47.273 [2024-11-25 12:25:43.144429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:47.273 [2024-11-25 12:25:43.144478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:47.273 [2024-11-25 12:25:43.144481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:48.208 12:25:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:48.208 12:25:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:26:48.208 12:25:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:48.208 12:25:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:48.208 12:25:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:48.208 12:25:43 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:48.208 12:25:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:48.208 [2024-11-25 12:25:44.023125] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:48.208 12:25:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:48.466 Malloc0 00:26:48.466 12:25:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:48.724 12:25:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:48.724 12:25:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:49.055 [2024-11-25 12:25:44.744008] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:49.055 12:25:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:26:49.340 [2024-11-25 12:25:44.916100] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:26:49.340 12:25:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:26:49.340 [2024-11-25 12:25:45.080265] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:26:49.340 12:25:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=87646 00:26:49.340 12:25:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:26:49.340 12:25:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:49.340 12:25:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 87646 /var/tmp/bdevperf.sock 
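The target-side provisioning traced above (transport, bdev, subsystem, namespace, three listeners) condenses to the following sketch over the application's RPC socket; names, sizes, and ports are exactly the ones in the log, and the 'rpc' shell variable is shorthand introduced here:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
$rpc nvmf_create_transport -t tcp -o -u 8192        # TCP transport; -u caps in-capsule data at 8192 bytes
$rpc bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM-backed bdev with 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do                      # one listener per path the failover test will exercise
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s "$port"
done

bdevperf then attaches to the subsystem twice with '-x failover' (ports 4420 and 4421 in the trace that follows), so the second connection becomes the standby path.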
00:26:49.341 12:25:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 87646 ']' 00:26:49.341 12:25:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:49.341 12:25:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:49.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:49.341 12:25:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:49.341 12:25:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:49.341 12:25:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:50.276 12:25:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:50.276 12:25:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:26:50.276 12:25:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:50.534 NVMe0n1 00:26:50.534 12:25:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:50.792 00:26:50.792 12:25:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=87694 00:26:50.792 12:25:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:26:50.792 12:25:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:51.726 12:25:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:51.984 [2024-11-25 12:25:47.676908] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ef830 is same with the state(6) to be set 00:26:51.984 [2024-11-25 12:25:47.676944] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ef830 is same with the state(6) to be set 00:26:51.984 [2024-11-25 12:25:47.676949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ef830 is same with the state(6) to be set 00:26:51.984 [2024-11-25 12:25:47.676953] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ef830 is same with the state(6) to be set 00:26:51.984 [2024-11-25 12:25:47.676957] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ef830 is same with the state(6) to be set 00:26:51.984 [2024-11-25 12:25:47.676961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ef830 is same with the state(6) to be set 00:26:51.984 [2024-11-25 12:25:47.676966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ef830 is same with the state(6) to be set 00:26:51.984 [2024-11-25 12:25:47.676970] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ef830 is same with the state(6) to be set 00:26:51.984 
[2024-11-25 12:25:47.676973 - 12:25:47.677132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: last message repeated for every entry in this interval 00:26:51.985 [2024-11-25 12:25:47.677136] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ef830 is same with the state(6) to be set 00:26:51.985 [2024-11-25 12:25:47.677139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ef830 is same with the state(6) to be set 00:26:51.985 [2024-11-25 12:25:47.677143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ef830 is same with the state(6) to be set 00:26:51.985 [2024-11-25 12:25:47.677147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ef830 is same with the state(6) to be set 00:26:51.985 [2024-11-25 12:25:47.677150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ef830 is same with the state(6) to be set 00:26:51.985 [2024-11-25 12:25:47.677154] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ef830 is same with the state(6) to be set 00:26:51.985 [2024-11-25 12:25:47.677157] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ef830 is same with the state(6) to be set 00:26:51.985 [2024-11-25 12:25:47.677161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ef830 is same with the state(6) to be set 00:26:51.985 [2024-11-25 12:25:47.677164] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ef830 is same with the state(6) to be set 00:26:51.985 [2024-11-25 12:25:47.677168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ef830 is same with the state(6) to be set 00:26:51.985 [2024-11-25 12:25:47.677171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ef830 is same with the state(6) to be set 00:26:51.985 [2024-11-25 12:25:47.677175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14ef830 is same with the state(6) to be set 00:26:51.985 12:25:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:26:55.263 12:25:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:55.263 00:26:55.263 12:25:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:26:55.520 [2024-11-25 12:25:51.188682] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0600 is same with the state(6) to be set 00:26:55.520 [2024-11-25 12:25:51.188721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0600 is same with the state(6) to be set 00:26:55.520 [2024-11-25 12:25:51.188728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0600 is same with the state(6) to be set 00:26:55.520 [2024-11-25 12:25:51.188734] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0600 is same with the state(6) to be set 00:26:55.520 [2024-11-25 12:25:51.188740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0600 is same with the state(6) to be set 00:26:55.520 [2024-11-25 12:25:51.188745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0600 is same with the state(6) to be set 00:26:55.520 [2024-11-25 12:25:51.188762] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0600 is same with the state(6) to be set 00:26:55.520 [2024-11-25 12:25:51.188767 - 12:25:51.188859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: last message repeated for every entry in this interval 00:26:55.520 [2024-11-25 12:25:51.188865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14f0600 is same with the state(6) to be set 00:26:55.521 12:25:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:26:58.801 12:25:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 
[2024-11-25 12:25:54.403611] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:58.801 12:25:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:26:59.734 12:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:26:59.993 [2024-11-25 12:25:55.618891] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x163a990 is same with the state(6) to be set 00:26:59.993 [2024-11-25 12:25:55.618926 - 12:25:55.619539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: last message repeated for every entry in this interval 00:26:59.994 [2024-11-25 12:25:55.619546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x163a990 is same with the state(6) to be set 00:26:59.994 [2024-11-25 12:25:55.619553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x163a990 is same with the state(6) to be set 00:26:59.994 [2024-11-25 12:25:55.619559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x163a990 is same with the state(6) to be set 00:26:59.994 [2024-11-25 12:25:55.619566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x163a990 is same with the state(6) to be set 00:26:59.994 12:25:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 87694 00:27:06.557 { 00:27:06.557 "results": [ 00:27:06.557 { 00:27:06.557 "job": "NVMe0n1", 00:27:06.557 "core_mask": "0x1", 00:27:06.557 "workload": "verify", 00:27:06.557 "status": "finished", 00:27:06.557 "verify_range": { 00:27:06.557 "start": 0, 00:27:06.557 "length": 16384 00:27:06.557 }, 00:27:06.557 "queue_depth": 128, 00:27:06.557 "io_size": 4096, 00:27:06.557 "runtime": 15.007339, 00:27:06.557 "iops": 11577.335595604258, 00:27:06.557 "mibps": 45.22396717032913, 00:27:06.557 "io_failed": 4477, 00:27:06.557 "io_timeout": 0, 00:27:06.557 "avg_latency_us": 10756.547521802971, 00:27:06.557 "min_latency_us": 393.84615384615387, 00:27:06.557 "max_latency_us": 19963.273846153847 00:27:06.557 } 00:27:06.557 ], 00:27:06.557 "core_count": 1 00:27:06.557 } 00:27:06.557 12:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 87646 00:27:06.557 12:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 87646 ']' 00:27:06.557 12:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 87646 00:27:06.557 12:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:27:06.557 12:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:06.557 12:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87646 00:27:06.557 killing process with pid 87646 00:27:06.557 12:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:06.557 12:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:06.557 12:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87646' 00:27:06.557 12:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 87646 00:27:06.557 12:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 87646 00:27:06.557 12:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:27:06.557 [2024-11-25 12:25:45.132305] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:27:06.557 [2024-11-25 12:25:45.132377] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87646 ] 00:27:06.557 [2024-11-25 12:25:45.275541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:06.557 [2024-11-25 12:25:45.310698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:06.557 Running I/O for 15 seconds... 
00:27:06.557 12770.00 IOPS, 49.88 MiB/s [2024-11-25T12:26:02.433Z] [2024-11-25 12:25:47.677649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:110992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.557 [2024-11-25 12:25:47.677690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... ~125 further print_command/print_completion pairs elided (12:25:47.677707-47.680245): every outstanding READ (lba 111000-111464) and WRITE (lba 111480-112008) on sqid:1 completes with ABORTED - SQ DELETION (00/08) while the TCP qpair is torn down ...]
00:27:06.560 [2024-11-25 12:25:47.680268] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:06.560 [2024-11-25 12:25:47.680276] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: [2024-11-25 12:25:47.680283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111472 len:8 PRP1 0x0 PRP2 0x0 [2024-11-25 12:25:47.680293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:06.560 [2024-11-25 12:25:47.680332] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421
[... four ASYNC EVENT REQUEST admin commands elided (qid:0, cid:0-3, 12:25:47.680371-47.680436), each completed with ABORTED - SQ DELETION (00/08) ...]
00:27:06.560 [2024-11-25 12:25:47.680445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. [2024-11-25 12:25:47.680473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ff30 (9): Bad file descriptor
00:27:06.560 [2024-11-25 12:25:47.683761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:27:06.560 [2024-11-25 12:25:47.709399] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:27:06.560 12409.50 IOPS, 48.47 MiB/s [2024-11-25T12:26:02.436Z] 12465.00 IOPS, 48.69 MiB/s [2024-11-25T12:26:02.436Z] 12446.75 IOPS, 48.62 MiB/s [2024-11-25T12:26:02.436Z] [2024-11-25 12:25:51.190322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:74384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:06.560 [2024-11-25 12:25:51.190363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... ~65 further print_command/print_completion pairs elided (12:25:51.190379-51.191411): READs for lba 74392-74600 and WRITEs for lba 74608-74912 on sqid:1, all completed with ABORTED - SQ DELETION (00/08) as the next failover starts ...]
00:27:06.562 [2024-11-25 12:25:51.191418] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.562 [2024-11-25 12:25:51.191426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:74920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.562 [2024-11-25 12:25:51.191434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.562 [2024-11-25 12:25:51.191441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:74928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.562 [2024-11-25 12:25:51.191448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.562 [2024-11-25 12:25:51.191456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:74936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.562 [2024-11-25 12:25:51.191463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.562 [2024-11-25 12:25:51.191472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:74944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.562 [2024-11-25 12:25:51.191478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.562 [2024-11-25 12:25:51.191486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:74952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.562 [2024-11-25 12:25:51.191493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.562 [2024-11-25 12:25:51.191501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:74960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.562 [2024-11-25 12:25:51.191508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.562 [2024-11-25 12:25:51.191517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:74968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.562 [2024-11-25 12:25:51.191523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.562 [2024-11-25 12:25:51.191531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:74976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.562 [2024-11-25 12:25:51.191538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.562 [2024-11-25 12:25:51.191546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:74984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.562 [2024-11-25 12:25:51.191553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.562 [2024-11-25 12:25:51.191572] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:06.562 [2024-11-25 12:25:51.191579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74992 len:8 PRP1 0x0 
PRP2 0x0 00:27:06.562 [2024-11-25 12:25:51.191586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.562 [2024-11-25 12:25:51.191599] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:06.562 [2024-11-25 12:25:51.191605] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:06.562 [2024-11-25 12:25:51.191610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75000 len:8 PRP1 0x0 PRP2 0x0 00:27:06.562 [2024-11-25 12:25:51.191617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.563 [2024-11-25 12:25:51.191624] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:06.563 [2024-11-25 12:25:51.191629] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:06.563 [2024-11-25 12:25:51.191634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75008 len:8 PRP1 0x0 PRP2 0x0 00:27:06.563 [2024-11-25 12:25:51.191640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.563 [2024-11-25 12:25:51.191648] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:06.563 [2024-11-25 12:25:51.191652] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:06.563 [2024-11-25 12:25:51.191658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75016 len:8 PRP1 0x0 PRP2 0x0 00:27:06.563 [2024-11-25 12:25:51.191665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.563 [2024-11-25 12:25:51.191672] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:06.563 [2024-11-25 12:25:51.191676] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:06.563 [2024-11-25 12:25:51.191682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75024 len:8 PRP1 0x0 PRP2 0x0 00:27:06.563 [2024-11-25 12:25:51.191688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.563 [2024-11-25 12:25:51.191695] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:06.563 [2024-11-25 12:25:51.191700] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:06.563 [2024-11-25 12:25:51.191705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75032 len:8 PRP1 0x0 PRP2 0x0 00:27:06.563 [2024-11-25 12:25:51.191712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.563 [2024-11-25 12:25:51.191718] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:06.563 [2024-11-25 12:25:51.191723] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:06.563 [2024-11-25 12:25:51.191728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75040 len:8 PRP1 0x0 PRP2 0x0 00:27:06.563 [2024-11-25 12:25:51.191735] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.563 [2024-11-25 12:25:51.191742] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:06.563 [2024-11-25 12:25:51.191747] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:06.563 [2024-11-25 12:25:51.191759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75048 len:8 PRP1 0x0 PRP2 0x0 00:27:06.563 [2024-11-25 12:25:51.191766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.563 [2024-11-25 12:25:51.191773] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:06.563 [2024-11-25 12:25:51.191777] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:06.563 [2024-11-25 12:25:51.191783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75056 len:8 PRP1 0x0 PRP2 0x0 00:27:06.563 [2024-11-25 12:25:51.191792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.563 [2024-11-25 12:25:51.191800] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:06.563 [2024-11-25 12:25:51.191804] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:06.563 [2024-11-25 12:25:51.191809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75064 len:8 PRP1 0x0 PRP2 0x0 00:27:06.563 [2024-11-25 12:25:51.191816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.563 [2024-11-25 12:25:51.191823] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:06.563 [2024-11-25 12:25:51.191828] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:06.563 [2024-11-25 12:25:51.191833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75072 len:8 PRP1 0x0 PRP2 0x0 00:27:06.563 [2024-11-25 12:25:51.191839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.563 [2024-11-25 12:25:51.191846] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:06.563 [2024-11-25 12:25:51.191851] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:06.563 [2024-11-25 12:25:51.191860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75080 len:8 PRP1 0x0 PRP2 0x0 00:27:06.563 [2024-11-25 12:25:51.191867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.563 [2024-11-25 12:25:51.191874] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:06.563 [2024-11-25 12:25:51.191878] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:06.563 [2024-11-25 12:25:51.191883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75088 len:8 PRP1 0x0 PRP2 0x0 00:27:06.563 [2024-11-25 12:25:51.191890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.563 [2024-11-25 12:25:51.191897] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:06.563 [2024-11-25 12:25:51.191902] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:06.563 [2024-11-25 12:25:51.191907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75096 len:8 PRP1 0x0 PRP2 0x0 00:27:06.563 [2024-11-25 12:25:51.191914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.563 [2024-11-25 12:25:51.191921] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:06.563 [2024-11-25 12:25:51.191925] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:06.563 [2024-11-25 12:25:51.191931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75104 len:8 PRP1 0x0 PRP2 0x0 00:27:06.563 [2024-11-25 12:25:51.191937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.563 [2024-11-25 12:25:51.191944] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:06.563 [2024-11-25 12:25:51.191949] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:06.563 [2024-11-25 12:25:51.191954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75112 len:8 PRP1 0x0 PRP2 0x0 00:27:06.563 [2024-11-25 12:25:51.191961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.563 [2024-11-25 12:25:51.191967] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:06.563 [2024-11-25 12:25:51.191972] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:06.563 [2024-11-25 12:25:51.191980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75120 len:8 PRP1 0x0 PRP2 0x0 00:27:06.563 [2024-11-25 12:25:51.191987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.563 [2024-11-25 12:25:51.191994] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:06.563 [2024-11-25 12:25:51.191999] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:06.563 [2024-11-25 12:25:51.192004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75128 len:8 PRP1 0x0 PRP2 0x0 00:27:06.563 [2024-11-25 12:25:51.192010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.563 [2024-11-25 12:25:51.192017] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:06.563 [2024-11-25 12:25:51.192022] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:06.563 [2024-11-25 12:25:51.192027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75136 len:8 PRP1 0x0 PRP2 0x0 00:27:06.563 [2024-11-25 12:25:51.192033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:27:06.563 [2024-11-25 12:25:51.192041] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:06.563 [2024-11-25 12:25:51.192046] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:06.563 [2024-11-25 12:25:51.192052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75144 len:8 PRP1 0x0 PRP2 0x0 00:27:06.563 [2024-11-25 12:25:51.192059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.563 [2024-11-25 12:25:51.192066] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:06.563 [2024-11-25 12:25:51.192071] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:06.563 [2024-11-25 12:25:51.192076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75152 len:8 PRP1 0x0 PRP2 0x0 00:27:06.563 [2024-11-25 12:25:51.192082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.563 [2024-11-25 12:25:51.192090] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:06.563 [2024-11-25 12:25:51.192094] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:06.563 [2024-11-25 12:25:51.192099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75160 len:8 PRP1 0x0 PRP2 0x0 00:27:06.563 [2024-11-25 12:25:51.192106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.563 [2024-11-25 12:25:51.192113] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:06.563 [2024-11-25 12:25:51.192118] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:06.563 [2024-11-25 12:25:51.192123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75168 len:8 PRP1 0x0 PRP2 0x0 00:27:06.563 [2024-11-25 12:25:51.192130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.563 [2024-11-25 12:25:51.192137] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:06.563 [2024-11-25 12:25:51.192142] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:06.563 [2024-11-25 12:25:51.192147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75176 len:8 PRP1 0x0 PRP2 0x0 00:27:06.563 [2024-11-25 12:25:51.192153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.563 [2024-11-25 12:25:51.192166] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:06.563 [2024-11-25 12:25:51.192171] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:06.563 [2024-11-25 12:25:51.192176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75184 len:8 PRP1 0x0 PRP2 0x0 00:27:06.563 [2024-11-25 12:25:51.192182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.563 [2024-11-25 
12:25:51.192189] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:06.563 [2024-11-25 12:25:51.192194] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:06.564 [2024-11-25 12:25:51.192199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75192 len:8 PRP1 0x0 PRP2 0x0 00:27:06.564 [2024-11-25 12:25:51.192206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.564 [2024-11-25 12:25:51.192213] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:06.564 [2024-11-25 12:25:51.192218] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:06.564 [2024-11-25 12:25:51.192223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75200 len:8 PRP1 0x0 PRP2 0x0 00:27:06.564 [2024-11-25 12:25:51.192229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.564 [2024-11-25 12:25:51.192236] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:06.564 [2024-11-25 12:25:51.192241] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:06.564 [2024-11-25 12:25:51.192246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75208 len:8 PRP1 0x0 PRP2 0x0 00:27:06.564 [2024-11-25 12:25:51.192253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.564 [2024-11-25 12:25:51.192260] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:06.564 [2024-11-25 12:25:51.192264] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:06.564 [2024-11-25 12:25:51.192269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75216 len:8 PRP1 0x0 PRP2 0x0 00:27:06.564 [2024-11-25 12:25:51.192276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.564 [2024-11-25 12:25:51.192283] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:06.564 [2024-11-25 12:25:51.192287] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:06.564 [2024-11-25 12:25:51.192293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75224 len:8 PRP1 0x0 PRP2 0x0 00:27:06.564 [2024-11-25 12:25:51.192299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.564 [2024-11-25 12:25:51.192306] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:06.564 [2024-11-25 12:25:51.192311] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:06.564 [2024-11-25 12:25:51.192316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75232 len:8 PRP1 0x0 PRP2 0x0 00:27:06.564 [2024-11-25 12:25:51.192322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.564 [2024-11-25 12:25:51.192329] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:06.564 [2024-11-25 12:25:51.192334] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:06.564 [2024-11-25 12:25:51.192339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75240 len:8 PRP1 0x0 PRP2 0x0 00:27:06.564 [2024-11-25 12:25:51.192348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.564 [2024-11-25 12:25:51.192358] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:06.564 [2024-11-25 12:25:51.192363] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:06.564 [2024-11-25 12:25:51.192368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75248 len:8 PRP1 0x0 PRP2 0x0 00:27:06.564 [2024-11-25 12:25:51.192375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.564 [2024-11-25 12:25:51.192381] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:06.564 [2024-11-25 12:25:51.192386] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:06.564 [2024-11-25 12:25:51.192391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75256 len:8 PRP1 0x0 PRP2 0x0 00:27:06.564 [2024-11-25 12:25:51.192398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.564 [2024-11-25 12:25:51.192405] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:06.564 [2024-11-25 12:25:51.192409] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:06.564 [2024-11-25 12:25:51.192414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75264 len:8 PRP1 0x0 PRP2 0x0 00:27:06.564 [2024-11-25 12:25:51.192421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.564 [2024-11-25 12:25:51.192428] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:06.564 [2024-11-25 12:25:51.192432] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:06.564 [2024-11-25 12:25:51.192438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75272 len:8 PRP1 0x0 PRP2 0x0 00:27:06.564 [2024-11-25 12:25:51.192445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.564 [2024-11-25 12:25:51.192452] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:06.564 [2024-11-25 12:25:51.192456] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:06.564 [2024-11-25 12:25:51.192462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75280 len:8 PRP1 0x0 PRP2 0x0 00:27:06.564 [2024-11-25 12:25:51.192468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.564 [2024-11-25 12:25:51.192475] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:27:06.564 [2024-11-25 12:25:51.192480] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:06.564 [2024-11-25 12:25:51.192485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75288 len:8 PRP1 0x0 PRP2 0x0 00:27:06.564 [2024-11-25 12:25:51.192491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.564 [2024-11-25 12:25:51.192499] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:06.564 [2024-11-25 12:25:51.192503] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:06.564 [2024-11-25 12:25:51.192508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75296 len:8 PRP1 0x0 PRP2 0x0 00:27:06.564 [2024-11-25 12:25:51.192515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.564 [2024-11-25 12:25:51.192522] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:06.564 [2024-11-25 12:25:51.192526] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:06.564 [2024-11-25 12:25:51.192535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75304 len:8 PRP1 0x0 PRP2 0x0 00:27:06.564 [2024-11-25 12:25:51.192542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.564 [2024-11-25 12:25:51.192550] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:06.564 [2024-11-25 12:25:51.192555] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:06.564 [2024-11-25 12:25:51.192561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75312 len:8 PRP1 0x0 PRP2 0x0 00:27:06.564 [2024-11-25 12:25:51.192568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.564 [2024-11-25 12:25:51.192575] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:06.564 [2024-11-25 12:25:51.192580] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:06.564 [2024-11-25 12:25:51.192584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75320 len:8 PRP1 0x0 PRP2 0x0 00:27:06.564 [2024-11-25 12:25:51.192591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.564 [2024-11-25 12:25:51.192598] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:06.564 [2024-11-25 12:25:51.192603] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:06.564 [2024-11-25 12:25:51.192608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75328 len:8 PRP1 0x0 PRP2 0x0 00:27:06.564 [2024-11-25 12:25:51.192615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.564 [2024-11-25 12:25:51.192621] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:06.564 [2024-11-25 12:25:51.192626] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:06.564 [2024-11-25 12:25:51.192632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75336 len:8 PRP1 0x0 PRP2 0x0 00:27:06.564 [2024-11-25 12:25:51.192639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.564 [2024-11-25 12:25:51.192645] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:06.564 [2024-11-25 12:25:51.192650] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:06.564 [2024-11-25 12:25:51.192655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75344 len:8 PRP1 0x0 PRP2 0x0 00:27:06.564 [2024-11-25 12:25:51.192662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.564 [2024-11-25 12:25:51.192669] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:06.564 [2024-11-25 12:25:51.192674] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:06.564 [2024-11-25 12:25:51.192679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75352 len:8 PRP1 0x0 PRP2 0x0 00:27:06.564 [2024-11-25 12:25:51.192685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.564 [2024-11-25 12:25:51.192692] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:06.564 [2024-11-25 12:25:51.192697] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:06.564 [2024-11-25 12:25:51.192702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75360 len:8 PRP1 0x0 PRP2 0x0 00:27:06.564 [2024-11-25 12:25:51.192709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.564 [2024-11-25 12:25:51.192716] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:06.564 [2024-11-25 12:25:51.192723] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:06.564 [2024-11-25 12:25:51.192729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75368 len:8 PRP1 0x0 PRP2 0x0 00:27:06.564 [2024-11-25 12:25:51.192735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.564 [2024-11-25 12:25:51.192743] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:06.564 [2024-11-25 12:25:51.192755] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:06.564 [2024-11-25 12:25:51.192760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75376 len:8 PRP1 0x0 PRP2 0x0 00:27:06.564 [2024-11-25 12:25:51.192767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.565 [2024-11-25 12:25:51.192774] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:06.565 [2024-11-25 12:25:51.192779] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:27:06.565 [2024-11-25 12:25:51.192784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75384 len:8 PRP1 0x0 PRP2 0x0 00:27:06.565 [2024-11-25 12:25:51.192790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.565 [2024-11-25 12:25:51.192797] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:06.565 [2024-11-25 12:25:51.192802] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:06.565 [2024-11-25 12:25:51.192807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75392 len:8 PRP1 0x0 PRP2 0x0 00:27:06.565 [2024-11-25 12:25:51.192813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.565 [2024-11-25 12:25:51.192820] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:06.565 [2024-11-25 12:25:51.192825] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:06.565 [2024-11-25 12:25:51.192830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:75400 len:8 PRP1 0x0 PRP2 0x0 00:27:06.565 [2024-11-25 12:25:51.192836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.565 [2024-11-25 12:25:51.192870] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:27:06.565 [2024-11-25 12:25:51.192904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.565 [2024-11-25 12:25:51.192912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.565 [2024-11-25 12:25:51.192920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.565 [2024-11-25 12:25:51.192927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.565 [2024-11-25 12:25:51.192935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.565 [2024-11-25 12:25:51.192941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.565 [2024-11-25 12:25:51.192949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.565 [2024-11-25 12:25:51.192955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.565 [2024-11-25 12:25:51.192962] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 
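[editor's note: the condensed abort storm above is easier to audit in aggregate than entry by entry. A minimal sketch for doing so, assuming the console output has been saved as autorun.log (hypothetical filename) and only standard grep/awk are available:

  # count how many commands were completed as ABORTED - SQ DELETION
  grep -c 'ABORTED - SQ DELETION' autorun.log
  # report the LBA window the aborted commands cover
  grep -o 'lba:[0-9]*' autorun.log | awk -F: '{ if (min=="" || $2+0<min) min=$2+0; if ($2+0>max) max=$2+0 } END { print "lba " min ".." max }'

Both commands are read-only over the saved log; the LBA window mixes the WRITE and READ traces, which is fine for a quick sanity check of the abort span.]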
00:27:06.565 [2024-11-25 12:25:51.192997] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ff30 (9): Bad file descriptor
00:27:06.565 [2024-11-25 12:25:51.195557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:27:06.565 [2024-11-25 12:25:51.223740] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
12363.00 IOPS, 48.29 MiB/s [2024-11-25T12:26:02.441Z]
12377.33 IOPS, 48.35 MiB/s [2024-11-25T12:26:02.441Z]
12415.43 IOPS, 48.50 MiB/s [2024-11-25T12:26:02.441Z]
12454.25 IOPS, 48.65 MiB/s [2024-11-25T12:26:02.441Z]
12492.67 IOPS, 48.80 MiB/s
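[editor's note: the throughput samples above are self-consistent if each I/O is 4 KiB, an assumption based on the len:8 block counts in this trace at a typical 512-byte block size: 12363.00 IOPS x 4096 B = 50,638,848 B/s = 48.29 MiB/s, matching the first sample, and 12492.67 IOPS x 4096 B = 48.80 MiB/s, matching the last.]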
[trace condensed: 00:27:06.565-06.567, 2024-11-25 12:25:55.619633-55.620766 - 69 queued READ commands (sqid:1, nsid:1, lba 7728..8272, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) each printed by nvme_qpair.c: 243:nvme_io_qpair_print_command and completed *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; the trace continues past lba 8272 and is cut off mid-entry here]
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.567 [2024-11-25 12:25:55.620774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:8280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.567 [2024-11-25 12:25:55.620781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.567 [2024-11-25 12:25:55.620790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:8288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.567 [2024-11-25 12:25:55.620797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.567 [2024-11-25 12:25:55.620805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:8296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.567 [2024-11-25 12:25:55.620812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.567 [2024-11-25 12:25:55.620820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:8304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.567 [2024-11-25 12:25:55.620828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.567 [2024-11-25 12:25:55.620836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:8312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.567 [2024-11-25 12:25:55.620843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.567 [2024-11-25 12:25:55.620854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:8320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.567 [2024-11-25 12:25:55.620861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.567 [2024-11-25 12:25:55.620870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.567 [2024-11-25 12:25:55.620877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.567 [2024-11-25 12:25:55.620885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:8336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.567 [2024-11-25 12:25:55.620892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.567 [2024-11-25 12:25:55.620900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:8344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.567 [2024-11-25 12:25:55.620907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.567 [2024-11-25 12:25:55.620915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:8352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.567 [2024-11-25 12:25:55.620922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:27:06.567 [2024-11-25 12:25:55.620930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:8360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.567 [2024-11-25 12:25:55.620936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.567 [2024-11-25 12:25:55.620944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:8368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.567 [2024-11-25 12:25:55.620952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.567 [2024-11-25 12:25:55.620960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:8376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.567 [2024-11-25 12:25:55.620967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.567 [2024-11-25 12:25:55.620975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:8384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.567 [2024-11-25 12:25:55.620982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.567 [2024-11-25 12:25:55.620990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:8392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.567 [2024-11-25 12:25:55.620997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.567 [2024-11-25 12:25:55.621005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.567 [2024-11-25 12:25:55.621012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.567 [2024-11-25 12:25:55.621020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:8408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.567 [2024-11-25 12:25:55.621027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.567 [2024-11-25 12:25:55.621036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:8416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.567 [2024-11-25 12:25:55.621046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.567 [2024-11-25 12:25:55.621054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.567 [2024-11-25 12:25:55.621061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.567 [2024-11-25 12:25:55.621070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.567 [2024-11-25 12:25:55.621076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.567 [2024-11-25 12:25:55.621085] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.567 [2024-11-25 12:25:55.621091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.567 [2024-11-25 12:25:55.621099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:8448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.567 [2024-11-25 12:25:55.621106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.567 [2024-11-25 12:25:55.621115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:8456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.567 [2024-11-25 12:25:55.621121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.567 [2024-11-25 12:25:55.621129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:8464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.567 [2024-11-25 12:25:55.621136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.567 [2024-11-25 12:25:55.621145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:8472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.567 [2024-11-25 12:25:55.621152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.567 [2024-11-25 12:25:55.621160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.567 [2024-11-25 12:25:55.621167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.567 [2024-11-25 12:25:55.621175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:8488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.567 [2024-11-25 12:25:55.621182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.567 [2024-11-25 12:25:55.621190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:8496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.567 [2024-11-25 12:25:55.621198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.567 [2024-11-25 12:25:55.621206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:8504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.567 [2024-11-25 12:25:55.621213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.567 [2024-11-25 12:25:55.621222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:8512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.567 [2024-11-25 12:25:55.621228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.567 [2024-11-25 12:25:55.621237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 
nsid:1 lba:8520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.567 [2024-11-25 12:25:55.621247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.567 [2024-11-25 12:25:55.621256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.567 [2024-11-25 12:25:55.621263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.567 [2024-11-25 12:25:55.621271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:8536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.567 [2024-11-25 12:25:55.621278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.567 [2024-11-25 12:25:55.621287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:8544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.567 [2024-11-25 12:25:55.621294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.567 [2024-11-25 12:25:55.621302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:8552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.568 [2024-11-25 12:25:55.621309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.568 [2024-11-25 12:25:55.621318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.568 [2024-11-25 12:25:55.621325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.568 [2024-11-25 12:25:55.621333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:8568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.568 [2024-11-25 12:25:55.621340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.568 [2024-11-25 12:25:55.621348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:8576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.568 [2024-11-25 12:25:55.621355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.568 [2024-11-25 12:25:55.621363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:8584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.568 [2024-11-25 12:25:55.621370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.568 [2024-11-25 12:25:55.621378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:8592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.568 [2024-11-25 12:25:55.621385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.568 [2024-11-25 12:25:55.621394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:8600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:06.568 [2024-11-25 12:25:55.621400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.568 [2024-11-25 12:25:55.621409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:8608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.568 [2024-11-25 12:25:55.621416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.568 [2024-11-25 12:25:55.621424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:8616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.568 [2024-11-25 12:25:55.621431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.568 [2024-11-25 12:25:55.621442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:8624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.568 [2024-11-25 12:25:55.621449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.568 [2024-11-25 12:25:55.621461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:8632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.568 [2024-11-25 12:25:55.621468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.568 [2024-11-25 12:25:55.621476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.568 [2024-11-25 12:25:55.621483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.568 [2024-11-25 12:25:55.621492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:8648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.568 [2024-11-25 12:25:55.621498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.568 [2024-11-25 12:25:55.621507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:8656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.568 [2024-11-25 12:25:55.621514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.568 [2024-11-25 12:25:55.621522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:8664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.568 [2024-11-25 12:25:55.621529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.568 [2024-11-25 12:25:55.621537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.568 [2024-11-25 12:25:55.621544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.568 [2024-11-25 12:25:55.621553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:8680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.568 [2024-11-25 12:25:55.621560] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.568 [2024-11-25 12:25:55.621568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:8688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.568 [2024-11-25 12:25:55.621575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.568 [2024-11-25 12:25:55.621583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:8696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.568 [2024-11-25 12:25:55.621590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.568 [2024-11-25 12:25:55.621598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:8704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.568 [2024-11-25 12:25:55.621605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.568 [2024-11-25 12:25:55.621613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:8712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.568 [2024-11-25 12:25:55.621620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.568 [2024-11-25 12:25:55.621628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:8720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.568 [2024-11-25 12:25:55.621642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.568 [2024-11-25 12:25:55.621651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:8728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.568 [2024-11-25 12:25:55.621658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.568 [2024-11-25 12:25:55.621666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:8736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:06.568 [2024-11-25 12:25:55.621673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.568 [2024-11-25 12:25:55.621681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11be650 is same with the state(6) to be set 00:27:06.568 [2024-11-25 12:25:55.621690] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:06.568 [2024-11-25 12:25:55.621697] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:06.568 [2024-11-25 12:25:55.621702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8744 len:8 PRP1 0x0 PRP2 0x0 00:27:06.568 [2024-11-25 12:25:55.621710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.568 [2024-11-25 12:25:55.621745] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:27:06.568 [2024-11-25 12:25:55.621786] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.568 [2024-11-25 12:25:55.621795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.568 [2024-11-25 12:25:55.621803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.568 [2024-11-25 12:25:55.621810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.568 [2024-11-25 12:25:55.621818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.568 [2024-11-25 12:25:55.621825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.568 [2024-11-25 12:25:55.621832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:06.568 [2024-11-25 12:25:55.621839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:06.568 [2024-11-25 12:25:55.621846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:27:06.568 [2024-11-25 12:25:55.624424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:27:06.568 [2024-11-25 12:25:55.624448] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114ff30 (9): Bad file descriptor 00:27:06.568 [2024-11-25 12:25:55.650712] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
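A capture like the span above is easier to audit with a filter than by eye. A minimal sketch, with the grep patterns copied verbatim from the messages above; try.txt is the log file this test writes (it is cat'ed and removed later in this trace), so pointing grep at that exact path is an assumption about when you run it:

  # Sketch: list path failovers, controller failures, and reset outcomes.
  grep -E 'bdev_nvme_failover_trid|in failed state|Resetting controller' \
      /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt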
00:27:06.568 12317.00 IOPS, 48.11 MiB/s [2024-11-25T12:26:02.444Z] 12122.64 IOPS, 47.35 MiB/s [2024-11-25T12:26:02.444Z] 11942.17 IOPS, 46.65 MiB/s [2024-11-25T12:26:02.444Z] 11801.38 IOPS, 46.10 MiB/s [2024-11-25T12:26:02.444Z] 11692.36 IOPS, 45.67 MiB/s [2024-11-25T12:26:02.444Z] 11577.60 IOPS, 45.23 MiB/s 00:27:06.568 Latency(us) 00:27:06.568 [2024-11-25T12:26:02.444Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:06.568 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:06.568 Verification LBA range: start 0x0 length 0x4000 00:27:06.568 NVMe0n1 : 15.01 11577.34 45.22 298.32 0.00 10756.55 393.85 19963.27 00:27:06.568 [2024-11-25T12:26:02.444Z] =================================================================================================================== 00:27:06.568 [2024-11-25T12:26:02.444Z] Total : 11577.34 45.22 298.32 0.00 10756.55 393.85 19963.27 00:27:06.568 Received shutdown signal, test time was about 15.000000 seconds 00:27:06.568 00:27:06.568 Latency(us) 00:27:06.568 [2024-11-25T12:26:02.444Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:06.568 [2024-11-25T12:26:02.444Z] =================================================================================================================== 00:27:06.568 [2024-11-25T12:26:02.444Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:06.568 12:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:27:06.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:06.568 12:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:27:06.568 12:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:27:06.568 12:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=87907 00:27:06.568 12:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 87907 /var/tmp/bdevperf.sock 00:27:06.568 12:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:27:06.569 12:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 87907 ']' 00:27:06.569 12:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:06.569 12:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:06.569 12:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:27:06.569 12:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:06.569 12:26:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:06.826 12:26:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:06.826 12:26:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:27:06.826 12:26:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:27:07.084 [2024-11-25 12:26:02.851289] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:27:07.084 12:26:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:27:07.341 [2024-11-25 12:26:03.059437] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:27:07.341 12:26:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:27:07.598 NVMe0n1 00:27:07.598 12:26:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:27:07.856 00:27:07.856 12:26:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:27:08.113 00:27:08.113 12:26:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:08.113 12:26:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:27:08.371 12:26:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:08.628 12:26:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:27:11.966 12:26:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:11.966 12:26:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:27:11.966 12:26:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=88044 00:27:11.967 12:26:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:11.967 12:26:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 88044 00:27:12.898 { 00:27:12.899 "results": [ 00:27:12.899 { 00:27:12.899 "job": "NVMe0n1", 00:27:12.899 "core_mask": "0x1", 00:27:12.899 "workload": "verify", 00:27:12.899 "status": "finished", 00:27:12.899 "verify_range": { 00:27:12.899 "start": 0, 00:27:12.899 "length": 16384 00:27:12.899 }, 00:27:12.899 "queue_depth": 128, 
00:27:12.899 "io_size": 4096, 00:27:12.899 "runtime": 1.008365, 00:27:12.899 "iops": 13168.842631388436, 00:27:12.899 "mibps": 51.44079152886108, 00:27:12.899 "io_failed": 0, 00:27:12.899 "io_timeout": 0, 00:27:12.899 "avg_latency_us": 9677.55910072005, 00:27:12.899 "min_latency_us": 1569.083076923077, 00:27:12.899 "max_latency_us": 14115.446153846155 00:27:12.899 } 00:27:12.899 ], 00:27:12.899 "core_count": 1 00:27:12.899 } 00:27:12.899 12:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:27:12.899 [2024-11-25 12:26:01.781283] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:27:12.899 [2024-11-25 12:26:01.781349] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87907 ] 00:27:12.899 [2024-11-25 12:26:01.912337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:12.899 [2024-11-25 12:26:01.943761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:12.899 [2024-11-25 12:26:04.363228] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:27:12.899 [2024-11-25 12:26:04.363306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:12.899 [2024-11-25 12:26:04.363318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.899 [2024-11-25 12:26:04.363327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:12.899 [2024-11-25 12:26:04.363334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.899 [2024-11-25 12:26:04.363342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:12.899 [2024-11-25 12:26:04.363348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.899 [2024-11-25 12:26:04.363356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:12.899 [2024-11-25 12:26:04.363362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:12.899 [2024-11-25 12:26:04.363369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:27:12.899 [2024-11-25 12:26:04.363397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:27:12.899 [2024-11-25 12:26:04.363411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f0f30 (9): Bad file descriptor 00:27:12.899 [2024-11-25 12:26:04.373593] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:27:12.899 Running I/O for 1 seconds... 
00:27:12.899 13151.00 IOPS, 51.37 MiB/s 00:27:12.899 Latency(us) 00:27:12.899 [2024-11-25T12:26:08.775Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:12.899 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:12.899 Verification LBA range: start 0x0 length 0x4000 00:27:12.899 NVMe0n1 : 1.01 13168.84 51.44 0.00 0.00 9677.56 1569.08 14115.45 00:27:12.899 [2024-11-25T12:26:08.775Z] =================================================================================================================== 00:27:12.899 [2024-11-25T12:26:08.775Z] Total : 13168.84 51.44 0.00 0.00 9677.56 1569.08 14115.45 00:27:12.899 12:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:12.899 12:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:27:13.157 12:26:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:13.415 12:26:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:13.415 12:26:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:27:13.673 12:26:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:13.931 12:26:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:27:17.213 12:26:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:17.213 12:26:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:27:17.213 12:26:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 87907 00:27:17.213 12:26:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 87907 ']' 00:27:17.213 12:26:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 87907 00:27:17.213 12:26:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:27:17.213 12:26:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:17.213 12:26:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87907 00:27:17.213 killing process with pid 87907 00:27:17.213 12:26:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:17.213 12:26:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:17.213 12:26:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87907' 00:27:17.213 12:26:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 87907 00:27:17.213 12:26:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 87907 00:27:17.213 12:26:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:27:17.213 12:26:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:17.471 12:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:27:17.471 12:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:27:17.471 12:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:27:17.471 12:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:17.471 12:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:27:17.471 12:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:17.471 12:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:27:17.471 12:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:17.471 12:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:17.471 rmmod nvme_tcp 00:27:17.471 rmmod nvme_fabrics 00:27:17.471 rmmod nvme_keyring 00:27:17.471 12:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:17.471 12:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:27:17.471 12:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:27:17.471 12:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 87539 ']' 00:27:17.471 12:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 87539 00:27:17.471 12:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 87539 ']' 00:27:17.471 12:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 87539 00:27:17.471 12:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:27:17.471 12:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:17.471 12:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87539 00:27:17.471 killing process with pid 87539 00:27:17.471 12:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:17.471 12:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:17.471 12:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87539' 00:27:17.471 12:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 87539 00:27:17.471 12:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 87539 00:27:17.730 12:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:17.730 12:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:17.730 12:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:17.730 12:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:27:17.730 12:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:27:17.730 12:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:17.730 12:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:27:17.730 12:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:17.730 12:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:27:17.730 12:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:27:17.730 12:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:27:17.730 12:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:27:17.730 12:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:27:17.730 12:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:27:17.730 12:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:27:17.730 12:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:27:17.730 12:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:27:17.730 12:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:27:17.730 12:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:27:17.730 12:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:27:17.730 12:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:17.730 12:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:17.730 12:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:27:17.730 12:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:17.730 12:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:17.730 12:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:17.730 12:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:27:17.730 00:27:17.730 real 0m31.087s 00:27:17.730 user 2m1.368s 00:27:17.730 sys 0m3.332s 00:27:17.730 12:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:17.730 12:26:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:27:17.730 ************************************ 00:27:17.730 END TEST nvmf_failover 00:27:17.730 ************************************ 00:27:17.730 12:26:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:27:17.730 12:26:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:17.730 12:26:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:17.730 12:26:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.989 ************************************ 00:27:17.989 START TEST nvmf_host_discovery 00:27:17.989 ************************************ 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:27:17.990 * Looking for test storage... 
00:27:17.990 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:17.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:17.990 --rc genhtml_branch_coverage=1 00:27:17.990 --rc genhtml_function_coverage=1 00:27:17.990 --rc genhtml_legend=1 00:27:17.990 --rc geninfo_all_blocks=1 00:27:17.990 --rc geninfo_unexecuted_blocks=1 00:27:17.990 00:27:17.990 ' 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:17.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:17.990 --rc genhtml_branch_coverage=1 00:27:17.990 --rc genhtml_function_coverage=1 00:27:17.990 --rc genhtml_legend=1 00:27:17.990 --rc geninfo_all_blocks=1 00:27:17.990 --rc geninfo_unexecuted_blocks=1 00:27:17.990 00:27:17.990 ' 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:17.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:17.990 --rc genhtml_branch_coverage=1 00:27:17.990 --rc genhtml_function_coverage=1 00:27:17.990 --rc genhtml_legend=1 00:27:17.990 --rc geninfo_all_blocks=1 00:27:17.990 --rc geninfo_unexecuted_blocks=1 00:27:17.990 00:27:17.990 ' 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:17.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:17.990 --rc genhtml_branch_coverage=1 00:27:17.990 --rc genhtml_function_coverage=1 00:27:17.990 --rc genhtml_legend=1 00:27:17.990 --rc geninfo_all_blocks=1 00:27:17.990 --rc geninfo_unexecuted_blocks=1 00:27:17.990 00:27:17.990 ' 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:17.990 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:17.990 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
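The one genuine error in the setup trace above comes from nvmf/common.sh line 33, where '[' '' -eq 1 ']' expands an empty variable into a numeric test and bash reports "[: : integer expression expected"; the condition then simply evaluates false and the run continues, but the message lands in every log. A minimal sketch of the usual guard, with SOME_FLAG standing in for whichever variable line 33 actually tests:

    # An unset or empty variable expands to an empty word inside
    # [ ... -eq ... ], which is not an integer. Defaulting it to 0
    # keeps the numeric test well-formed and the skip explicit.
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        :  # branch body elided; taken only when the flag is explicitly 1
    fi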
00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:27:17.991 Cannot find device "nvmf_init_br" 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:27:17.991 Cannot find device "nvmf_init_br2" 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:27:17.991 Cannot find device "nvmf_tgt_br" 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:27:17.991 Cannot find device "nvmf_tgt_br2" 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:27:17.991 Cannot find device "nvmf_init_br" 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:27:17.991 Cannot find device "nvmf_init_br2" 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:27:17.991 Cannot find device "nvmf_tgt_br" 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:27:17.991 Cannot find device "nvmf_tgt_br2" 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:27:17.991 Cannot find device "nvmf_br" 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:27:17.991 Cannot find device "nvmf_init_if" 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:27:17.991 Cannot find device "nvmf_init_if2" 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:17.991 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:17.991 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:27:17.991 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:27:18.263 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:18.263 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:27:18.263 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:18.263 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:18.263 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:18.263 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:18.263 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:18.263 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:27:18.263 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:27:18.264 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:27:18.264 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:27:18.264 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:27:18.264 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:27:18.264 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:27:18.264 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:27:18.264 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:27:18.264 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:18.264 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:18.264 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:18.264 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:27:18.264 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:27:18.264 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:27:18.264 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:27:18.264 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:18.264 12:26:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:18.264 12:26:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:18.264 12:26:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:27:18.264 12:26:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:27:18.264 12:26:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:27:18.264 12:26:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:18.264 12:26:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:27:18.264 12:26:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:27:18.264 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:18.264 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:27:18.264 00:27:18.264 --- 10.0.0.3 ping statistics --- 00:27:18.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.264 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:27:18.264 12:26:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:27:18.264 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:27:18.264 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.037 ms 00:27:18.264 00:27:18.264 --- 10.0.0.4 ping statistics --- 00:27:18.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.264 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:27:18.264 12:26:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:18.264 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:18.264 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:27:18.264 00:27:18.264 --- 10.0.0.1 ping statistics --- 00:27:18.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.264 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:27:18.264 12:26:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:27:18.264 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:18.264 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:27:18.264 00:27:18.264 --- 10.0.0.2 ping statistics --- 00:27:18.264 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.264 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:27:18.264 12:26:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:18.264 12:26:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:27:18.264 12:26:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:18.264 12:26:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:18.264 12:26:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:18.264 12:26:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:18.264 12:26:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:18.264 12:26:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:18.264 12:26:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:18.264 12:26:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:27:18.264 12:26:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:18.264 12:26:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:18.264 12:26:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:18.264 12:26:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=88392 00:27:18.264 12:26:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 88392 00:27:18.264 12:26:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 88392 ']' 00:27:18.264 12:26:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:18.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:18.264 12:26:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:18.264 12:26:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:18.264 12:26:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:18.264 12:26:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:18.264 12:26:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:18.264 [2024-11-25 12:26:14.082796] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
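Worth pausing on the launch pattern traced just above: nvmfappstart runs the target binary inside the freshly built namespace and then blocks until the RPC socket answers. A sketch reconstructed from the trace (NVMF_TARGET_NS_CMD was defined earlier as the array "ip netns exec nvmf_tgt_ns_spdk"):

    # Reconstructed from the trace above: start nvmf_tgt inside the target
    # namespace, remember its pid, and poll until the default RPC socket
    # (/var/tmp/spdk.sock) accepts connections before issuing any rpc_cmd.
    "${NVMF_TARGET_NS_CMD[@]}" /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    waitforlisten "$nvmfpid"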
00:27:18.264 [2024-11-25 12:26:14.082852] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:18.522 [2024-11-25 12:26:14.222903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:18.522 [2024-11-25 12:26:14.257141] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:18.522 [2024-11-25 12:26:14.257178] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:18.522 [2024-11-25 12:26:14.257184] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:18.522 [2024-11-25 12:26:14.257189] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:18.522 [2024-11-25 12:26:14.257193] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:18.522 [2024-11-25 12:26:14.257447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:19.086 12:26:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:19.086 12:26:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:27:19.086 12:26:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:19.086 12:26:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:19.086 12:26:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:19.344 12:26:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:19.344 12:26:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:19.344 12:26:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.344 12:26:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:19.344 [2024-11-25 12:26:14.988078] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:19.344 12:26:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.344 12:26:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:27:19.344 12:26:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.344 12:26:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:19.344 [2024-11-25 12:26:14.996157] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:27:19.344 12:26:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.344 12:26:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:27:19.344 12:26:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.344 12:26:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:19.344 null0 00:27:19.344 12:26:15 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.344 12:26:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:27:19.344 12:26:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.344 12:26:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:19.344 null1 00:27:19.344 12:26:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.344 12:26:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:27:19.344 12:26:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.344 12:26:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:19.344 12:26:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.344 12:26:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=88443 00:27:19.344 12:26:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 88443 /tmp/host.sock 00:27:19.344 12:26:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:27:19.344 12:26:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 88443 ']' 00:27:19.344 12:26:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:27:19.344 12:26:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:19.344 12:26:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:19.344 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:19.344 12:26:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:19.344 12:26:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:19.344 [2024-11-25 12:26:15.064918] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
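The host side, traced just above and below, is simply a second nvmf_tgt bound to its own RPC socket; bdev_nvme_start_discovery then makes it follow the discovery subsystem and auto-attach whatever the log page reports. Reconstructed from the trace:

    # Reconstructed from the trace: a second nvmf_tgt plays the NVMe-oF
    # host on /tmp/host.sock, then follows the discovery service on
    # 10.0.0.3:8009, attaching controllers under the base name "nvme".
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
    hostpid=$!
    waitforlisten "$hostpid" /tmp/host.sock
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test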
00:27:19.344 [2024-11-25 12:26:15.064977] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88443 ] 00:27:19.344 [2024-11-25 12:26:15.204041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:19.602 [2024-11-25 12:26:15.239917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:20.168 12:26:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:20.168 12:26:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:27:20.168 12:26:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:20.168 12:26:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:27:20.168 12:26:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.168 12:26:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:20.168 12:26:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.168 12:26:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:27:20.168 12:26:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.168 12:26:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:20.168 12:26:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.168 12:26:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:27:20.168 12:26:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:27:20.168 12:26:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:20.168 12:26:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:20.168 12:26:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:20.168 12:26:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:20.168 12:26:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.168 12:26:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:20.168 12:26:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.168 12:26:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:27:20.168 12:26:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:27:20.168 12:26:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:20.168 12:26:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.168 12:26:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:20.168 12:26:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:27:20.168 12:26:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:20.168 12:26:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:20.168 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.168 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:27:20.168 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:27:20.168 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.168 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:20.168 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.168 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:20.426 [2024-11-25 12:26:16.188425] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:27:20.426 12:26:16 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:27:20.426 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:27:20.427 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:27:20.427 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:20.427 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:27:20.427 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.427 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:20.427 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.685 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:20.685 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:20.685 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:20.685 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:20.685 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:20.685 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:27:20.685 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:20.685 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.685 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:20.685 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:20.685 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:20.685 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:20.685 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.685 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:27:20.685 12:26:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:27:21.250 [2024-11-25 12:26:16.960459] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:27:21.250 [2024-11-25 12:26:16.960483] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:27:21.250 [2024-11-25 12:26:16.960493] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:27:21.250 
[2024-11-25 12:26:17.046563] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:27:21.250 [2024-11-25 12:26:17.100831] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:27:21.250 [2024-11-25 12:26:17.101348] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1a1bba0:1 started. 00:27:21.250 [2024-11-25 12:26:17.102613] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:27:21.250 [2024-11-25 12:26:17.102632] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:27:21.250 [2024-11-25 12:26:17.109172] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1a1bba0 was disconnected and freed. delete nvme_qpair. 00:27:21.508 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:21.508 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:21.508 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:27:21.508 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:21.508 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:21.508 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:21.508 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.508 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:21.508 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:21.508 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.508 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.508 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:21.508 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:27:21.508 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:27:21.508 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:21.508 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:21.508 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:27:21.508 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:27:21.508 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:21.508 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:21.508 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.508 12:26:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:21.508 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:21.508 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:21.768 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.768 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:27:21.768 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:21.769 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:27:21.769 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:27:21.769 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:21.769 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:21.769 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:27:21.769 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:27:21.769 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:21.769 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:21.769 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:21.769 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.769 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:21.769 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:21.769 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.769 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:27:21.769 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:21.769 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:27:21.769 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:27:21.769 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:21.769 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:21.769 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:21.769 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:21.769 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:21.769 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_notification_count 00:27:21.770 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:27:21.770 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.770 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:27:21.770 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:21.770 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.770 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:27:21.770 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:27:21.770 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:27:21.770 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:21.770 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:27:21.770 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.770 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:21.770 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.770 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:21.770 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:21.770 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:21.770 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:21.770 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:21.770 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:27:21.770 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:21.770 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:21.770 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.770 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:21.771 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:21.771 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:21.771 [2024-11-25 12:26:17.481699] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1996580:1 started. 00:27:21.771 [2024-11-25 12:26:17.489158] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1996580 was disconnected and freed. delete nvme_qpair. 
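The notification bookkeeping behind these checks is compact: ask the host app for notifications newer than the cursor, count them with jq, and advance the cursor so the next comparison only sees new events. Reconstructed from the trace:

    # Reconstructed from the trace: count notifications past $notify_id on
    # the host's RPC socket, then advance the cursor by that count; the
    # values traced above (notify_id 0 -> 1 -> 2) match this arithmetic.
    get_notification_count() {
        notification_count=$(rpc_cmd -s /tmp/host.sock \
            notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }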
00:27:21.771 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.771 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:21.771 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:21.771 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:27:21.771 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:27:21.771 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:21.771 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:21.771 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:21.771 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:21.771 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:21.771 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:27:21.771 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:27:21.771 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:27:21.771 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.771 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:21.771 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.771 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:27:21.771 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:27:21.771 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:27:21.771 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:21.771 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:27:21.771 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.771 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:21.771 [2024-11-25 12:26:17.548799] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:27:21.771 [2024-11-25 12:26:17.549169] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:27:21.772 [2024-11-25 12:26:17.549189] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:27:21.772 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.772 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:27:21.772 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:21.772 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:21.772 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:21.772 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:21.772 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:27:21.772 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:21.772 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:21.772 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:21.772 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:21.772 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.772 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:21.772 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.772 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.772 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:21.772 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:21.772 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:21.772 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:21.772 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:21.772 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:21.772 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:27:21.772 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:21.772 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:21.772 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.772 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:21.773 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:21.773 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:21.773 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:21.773 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:21.773 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:21.773 12:26:17 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:27:21.773 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:27:21.773 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:21.773 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:21.773 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:27:21.773 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:27:21.773 [2024-11-25 12:26:17.635229] bdev_nvme.c:7403:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:27:21.773 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:21.773 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:21.773 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:21.773 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:21.773 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:21.773 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:22.038 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.038 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:27:22.038 12:26:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:27:22.038 [2024-11-25 12:26:17.694453] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:27:22.038 [2024-11-25 12:26:17.694488] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:27:22.038 [2024-11-25 12:26:17.694494] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:27:22.038 [2024-11-25 12:26:17.694497] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:27:22.973 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:22.973 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:27:22.973 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:27:22.973 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:22.973 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.973 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 
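Note: the xtrace above keeps expanding the waitforcondition helper from common/autotest_common.sh (local cond= ... / local max=10 / (( max-- )) / eval ... / sleep 1 / return 0). A minimal sketch of that polling loop, reconstructed from the trace rather than quoted from the SPDK sources:

    # Poll a caller-supplied shell condition for up to ~10 s (sketch; the
    # real helper in common/autotest_common.sh may differ in detail).
    waitforcondition() {
        local cond=$1
        local max=10
        while ((max--)); do
            eval "$cond" && return 0   # condition holds: done
            sleep 1                    # cf. autotest_common.sh@924 above
        done
        return 1                       # gave up after max attempts
    }

    # As used at host/discovery.sh@122 above:
    # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'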
00:27:22.973 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:22.973 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:22.973 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:22.973 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.973 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:27:22.973 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:22.973 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:27:22.973 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:27:22.973 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:22.973 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:22.973 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:22.973 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:22.973 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:22.973 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:27:22.973 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:22.973 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:22.973 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.973 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:22.973 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.973 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:27:22.973 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:27:22.973 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:27:22.973 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:22.973 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:27:22.973 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.973 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:22.973 [2024-11-25 12:26:18.742086] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:27:22.973 [2024-11-25 12:26:18.742111] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:27:22.973 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.973 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:22.973 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:22.973 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:22.974 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:22.974 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:22.974 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:27:22.974 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:22.974 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:22.974 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:22.974 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.974 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:22.974 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:22.974 [2024-11-25 12:26:18.750977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:22.974 [2024-11-25 12:26:18.751001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.974 
[2024-11-25 12:26:18.751008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:22.974 [2024-11-25 12:26:18.751013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.974 [2024-11-25 12:26:18.751018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:22.974 [2024-11-25 12:26:18.751023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.974 [2024-11-25 12:26:18.751029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:22.974 [2024-11-25 12:26:18.751033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.974 [2024-11-25 12:26:18.751038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ee280 is same with the state(6) to be set 00:27:22.974 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.974 [2024-11-25 12:26:18.760950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ee280 (9): Bad file descriptor 00:27:22.974 [2024-11-25 12:26:18.770962] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:22.974 [2024-11-25 12:26:18.770975] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:22.974 [2024-11-25 12:26:18.770979] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:22.974 [2024-11-25 12:26:18.770983] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:22.974 [2024-11-25 12:26:18.771005] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:22.974 [2024-11-25 12:26:18.771057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.974 [2024-11-25 12:26:18.771067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19ee280 with addr=10.0.0.3, port=4420 00:27:22.974 [2024-11-25 12:26:18.771072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ee280 is same with the state(6) to be set 00:27:22.974 [2024-11-25 12:26:18.771081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ee280 (9): Bad file descriptor 00:27:22.974 [2024-11-25 12:26:18.771093] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:22.974 [2024-11-25 12:26:18.771097] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:22.974 [2024-11-25 12:26:18.771103] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:22.974 [2024-11-25 12:26:18.771107] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
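Note: host/discovery.sh@127 above removed the 4420 listener (nvmf_subsystem_remove_listener ... -s 4420), so the connect() failed, errno = 111 / Bad file descriptor records here are the host-side bdev_nvme reconnect poller probing the closed port; in this test they are expected noise, not a failure. The check that eventually passes compares the remaining paths against $NVMF_SECOND_PORT via get_subsystem_paths, whose pipeline is visible in the xtrace (host/discovery.sh@63); a sketch with the helper body assumed:

    # Print the sorted trsvcid list for one controller (sketch of
    # host/discovery.sh@63; socket path as used throughout this test).
    get_subsystem_paths() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }
    # Waited-for condition: [[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]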
00:27:22.974 [2024-11-25 12:26:18.771110] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:22.974 [2024-11-25 12:26:18.771113] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:22.974 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.974 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:22.974 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:22.974 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:22.974 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:22.974 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:22.974 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:22.974 [2024-11-25 12:26:18.781011] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:22.974 [2024-11-25 12:26:18.781025] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:22.974 [2024-11-25 12:26:18.781027] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:22.974 [2024-11-25 12:26:18.781030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:22.974 [2024-11-25 12:26:18.781047] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:22.974 [2024-11-25 12:26:18.781073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.974 [2024-11-25 12:26:18.781081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19ee280 with addr=10.0.0.3, port=4420 00:27:22.974 [2024-11-25 12:26:18.781085] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ee280 is same with the state(6) to be set 00:27:22.974 [2024-11-25 12:26:18.781092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ee280 (9): Bad file descriptor 00:27:22.974 [2024-11-25 12:26:18.781103] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:22.974 [2024-11-25 12:26:18.781107] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:22.974 [2024-11-25 12:26:18.781112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:22.974 [2024-11-25 12:26:18.781116] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:22.974 [2024-11-25 12:26:18.781118] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:22.974 [2024-11-25 12:26:18.781121] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:27:22.974 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:27:22.974 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:22.974 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:22.974 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:22.974 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:22.974 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.974 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:22.974 [2024-11-25 12:26:18.791055] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:22.974 [2024-11-25 12:26:18.791073] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:22.974 [2024-11-25 12:26:18.791076] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:22.974 [2024-11-25 12:26:18.791079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:22.974 [2024-11-25 12:26:18.791095] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:22.974 [2024-11-25 12:26:18.791129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.974 [2024-11-25 12:26:18.791138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19ee280 with addr=10.0.0.3, port=4420 00:27:22.975 [2024-11-25 12:26:18.791143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ee280 is same with the state(6) to be set 00:27:22.975 [2024-11-25 12:26:18.791155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ee280 (9): Bad file descriptor 00:27:22.975 [2024-11-25 12:26:18.791162] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:22.975 [2024-11-25 12:26:18.791166] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:22.975 [2024-11-25 12:26:18.791170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:22.975 [2024-11-25 12:26:18.791174] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:22.975 [2024-11-25 12:26:18.791177] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:22.975 [2024-11-25 12:26:18.791179] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:22.975 [2024-11-25 12:26:18.801100] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:22.975 [2024-11-25 12:26:18.801115] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:22.975 [2024-11-25 12:26:18.801117] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:27:22.975 [2024-11-25 12:26:18.801119] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:22.975 [2024-11-25 12:26:18.801132] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:22.975 [2024-11-25 12:26:18.801160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.975 [2024-11-25 12:26:18.801167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19ee280 with addr=10.0.0.3, port=4420 00:27:22.975 [2024-11-25 12:26:18.801172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ee280 is same with the state(6) to be set 00:27:22.975 [2024-11-25 12:26:18.801180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ee280 (9): Bad file descriptor 00:27:22.975 [2024-11-25 12:26:18.801187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:22.975 [2024-11-25 12:26:18.801190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:22.975 [2024-11-25 12:26:18.801195] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:22.975 [2024-11-25 12:26:18.801199] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:22.975 [2024-11-25 12:26:18.801201] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:22.975 [2024-11-25 12:26:18.801203] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:22.975 [2024-11-25 12:26:18.811137] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:22.975 [2024-11-25 12:26:18.811151] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:22.975 [2024-11-25 12:26:18.811154] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:22.975 [2024-11-25 12:26:18.811156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:22.975 [2024-11-25 12:26:18.811169] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:27:22.975 [2024-11-25 12:26:18.811193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.975 [2024-11-25 12:26:18.811200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19ee280 with addr=10.0.0.3, port=4420 00:27:22.975 [2024-11-25 12:26:18.811205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ee280 is same with the state(6) to be set 00:27:22.975 [2024-11-25 12:26:18.811212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ee280 (9): Bad file descriptor 00:27:22.975 [2024-11-25 12:26:18.811219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:22.975 [2024-11-25 12:26:18.811223] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:22.975 [2024-11-25 12:26:18.811227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:22.975 [2024-11-25 12:26:18.811231] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:22.975 [2024-11-25 12:26:18.811234] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:22.975 [2024-11-25 12:26:18.811236] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:22.975 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.975 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:22.975 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:22.975 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:27:22.975 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:27:22.975 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:22.975 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:22.975 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:27:22.975 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:27:22.975 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:22.975 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:22.975 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:22.975 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:22.975 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.975 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:22.975 [2024-11-25 12:26:18.821174] 
bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:22.975 [2024-11-25 12:26:18.821187] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:22.975 [2024-11-25 12:26:18.821189] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:22.975 [2024-11-25 12:26:18.821192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:22.975 [2024-11-25 12:26:18.821206] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:22.975 [2024-11-25 12:26:18.821228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:22.975 [2024-11-25 12:26:18.821235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19ee280 with addr=10.0.0.3, port=4420 00:27:22.975 [2024-11-25 12:26:18.821240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ee280 is same with the state(6) to be set 00:27:22.975 [2024-11-25 12:26:18.821247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ee280 (9): Bad file descriptor 00:27:22.975 [2024-11-25 12:26:18.821253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:22.975 [2024-11-25 12:26:18.821257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:22.975 [2024-11-25 12:26:18.821261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:22.975 [2024-11-25 12:26:18.821265] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:22.975 [2024-11-25 12:26:18.821267] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:22.975 [2024-11-25 12:26:18.821270] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
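Note: the five reconnect attempts logged between 12:26:18.770 and 12:26:18.821 are the same poller retrying at its poll interval until the discovery service confirms the old path is gone (4420 not found, just below). The notification bookkeeping that follows pairs notify_get_notifications with jq '. | length' and advances notify_id by the returned count (1 -> 2 after one event earlier, 2 -> 4 after two events later). A sketch consistent with those logged values, with the increment inferred rather than quoted:

    # Sketch of get_notification_count (host/discovery.sh@74-75); the
    # notify_id update is inferred from the values printed in this log.
    get_notification_count() {
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications \
            -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }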
00:27:22.975 [2024-11-25 12:26:18.828156] bdev_nvme.c:7266:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:27:22.975 [2024-11-25 12:26:18.828175] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:27:22.975 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.234 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:27:23.234 12:26:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:27:24.242 12:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:24.242 12:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:27:24.242 12:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:27:24.242 12:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:24.242 12:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:24.242 12:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.242 12:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:24.242 12:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:27:24.242 12:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:27:24.242 12:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.242 12:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:27:24.242 12:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:24.242 12:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:27:24.242 12:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:27:24.242 12:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:24.242 12:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:24.242 12:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:24.242 12:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:24.242 12:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:24.242 12:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:27:24.242 12:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:24.242 12:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:24.242 12:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.242 12:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:24.242 12:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.242 12:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:27:24.242 12:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:27:24.242 12:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:27:24.242 12:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:24.242 12:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:27:24.242 12:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.242 12:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:24.242 12:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.242 12:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:27:24.242 12:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:27:24.242 12:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:24.242 12:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:24.242 12:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:27:24.242 12:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:27:24.242 12:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:27:24.242 12:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:24.242 12:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:24.242 12:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.242 12:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:24.242 12:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:27:24.242 12:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.242 12:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:27:24.242 12:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:24.243 12:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:27:24.243 12:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:27:24.243 12:26:19 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:24.243 12:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:24.243 12:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:27:24.243 12:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:27:24.243 12:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:24.243 12:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.243 12:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:24.243 12:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:24.243 12:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:24.243 12:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:24.243 12:26:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.243 12:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:27:24.243 12:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:24.243 12:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:27:24.243 12:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:27:24.243 12:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:24.243 12:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:24.243 12:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:24.243 12:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:24.243 12:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:24.243 12:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:27:24.243 12:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:24.243 12:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.243 12:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:24.243 12:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:24.243 12:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.243 12:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:27:24.243 12:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:27:24.243 12:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:27:24.243 12:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:24.243 12:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:24.243 12:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.243 12:26:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:25.617 [2024-11-25 12:26:21.061066] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:27:25.617 [2024-11-25 12:26:21.061098] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:27:25.617 [2024-11-25 12:26:21.061111] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:27:25.617 [2024-11-25 12:26:21.147156] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:27:25.617 [2024-11-25 12:26:21.205418] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:27:25.617 [2024-11-25 12:26:21.205815] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1a212f0:1 started. 00:27:25.617 [2024-11-25 12:26:21.207302] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:27:25.617 [2024-11-25 12:26:21.207344] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:27:25.617 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.617 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:25.617 [2024-11-25 12:26:21.209818] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1a212f0 was disconnected and freed. delete nvme_qpair. 
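Note: host/discovery.sh@143 below wraps a duplicate bdev_nvme_start_discovery in NOT, the autotest helper that treats command failure as success (the local es=0 ... (( !es == 0 )) expansion). A simplified sketch of that inversion, reconstructed from the trace (the real helper also validates the argument first, as the case "$(type -t "$arg")" lines show):

    # Succeed only if the wrapped command fails (sketch of
    # common/autotest_common.sh@652-679 as it expands in this trace).
    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return "$es"  # killed by signal: propagate
        (( !es == 0 ))                  # exit 0 iff the command failed
    }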
00:27:25.617 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:27:25.617 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:25.617 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:25.617 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:25.617 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:25.617 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:25.617 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:25.617 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.617 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:25.617 2024/11/25 12:26:21 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.3 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:27:25.617 request: 00:27:25.617 { 00:27:25.617 "method": "bdev_nvme_start_discovery", 00:27:25.617 "params": { 00:27:25.617 "name": "nvme", 00:27:25.617 "trtype": "tcp", 00:27:25.617 "traddr": "10.0.0.3", 00:27:25.617 "adrfam": "ipv4", 00:27:25.617 "trsvcid": "8009", 00:27:25.617 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:25.617 "wait_for_attach": true 00:27:25.617 } 00:27:25.617 } 00:27:25.617 Got JSON-RPC error response 00:27:25.617 GoRPCClient: error on JSON-RPC call 00:27:25.617 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:25.617 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:27:25.617 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:25.617 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:25.617 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:25.617 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:27:25.617 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:25.617 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:25.617 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:27:25.617 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.617 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:27:25.617 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:25.617 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.617 12:26:21 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:27:25.617 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:27:25.617 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:25.617 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:25.617 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:25.617 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:25.618 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.618 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:25.618 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.618 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:25.618 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:25.618 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:27:25.618 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:25.618 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:25.618 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:25.618 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:25.618 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:25.618 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:25.618 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.618 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:25.618 2024/11/25 12:26:21 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.3 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:27:25.618 request: 00:27:25.618 { 00:27:25.618 "method": "bdev_nvme_start_discovery", 00:27:25.618 "params": { 00:27:25.618 "name": "nvme_second", 00:27:25.618 "trtype": "tcp", 00:27:25.618 "traddr": "10.0.0.3", 00:27:25.618 "adrfam": "ipv4", 00:27:25.618 "trsvcid": "8009", 00:27:25.618 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:25.618 "wait_for_attach": true 00:27:25.618 } 00:27:25.618 } 00:27:25.618 Got JSON-RPC error response 00:27:25.618 GoRPCClient: error on JSON-RPC call 00:27:25.618 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
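Note: both duplicate calls fail identically with Code=-17 (File exists): whether the name nvme is reused or a fresh name nvme_second is given, a discovery service is already attached to 10.0.0.3:8009. rpc_cmd here is the autotest wrapper around SPDK's scripts/rpc.py, so outside the harness the same negative case could be reproduced directly (illustrative invocation, flags exactly as in the trace):

    # Expected to fail with -17 "File exists" while the first discovery
    # service on 10.0.0.3:8009 is still attached.
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test -w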
00:27:25.618 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:27:25.618 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:25.618 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:25.618 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:25.618 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:27:25.618 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:25.618 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.618 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:25.618 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:25.618 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:27:25.618 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:27:25.618 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.618 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:27:25.618 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:27:25.618 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:25.618 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.618 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:25.618 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:27:25.618 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:27:25.618 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:25.618 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.618 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:25.618 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:25.618 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:27:25.618 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:25.618 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:25.618 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:25.618 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:25.618 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:25.618 
12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:25.618 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.618 12:26:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:26.553 [2024-11-25 12:26:22.388285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.553 [2024-11-25 12:26:22.388335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19ed3a0 with addr=10.0.0.3, port=8010 00:27:26.553 [2024-11-25 12:26:22.388348] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:26.553 [2024-11-25 12:26:22.388355] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:26.553 [2024-11-25 12:26:22.388360] bdev_nvme.c:7547:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:27:27.926 [2024-11-25 12:26:23.388276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:27.926 [2024-11-25 12:26:23.388321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19ed3a0 with addr=10.0.0.3, port=8010 00:27:27.926 [2024-11-25 12:26:23.388332] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:27.926 [2024-11-25 12:26:23.388337] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:27.926 [2024-11-25 12:26:23.388342] bdev_nvme.c:7547:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:27:28.860 [2024-11-25 12:26:24.388203] bdev_nvme.c:7522:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:27:28.860 request: 00:27:28.860 { 00:27:28.860 "method": "bdev_nvme_start_discovery", 00:27:28.860 "params": { 00:27:28.860 "name": "nvme_second", 00:27:28.860 "trtype": "tcp", 00:27:28.860 "traddr": "10.0.0.3", 00:27:28.860 "adrfam": "ipv4", 00:27:28.860 "trsvcid": "8010", 00:27:28.860 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:28.860 "wait_for_attach": false, 00:27:28.860 "attach_timeout_ms": 3000 00:27:28.860 } 00:27:28.860 } 00:27:28.860 Got JSON-RPC error response 00:27:28.860 GoRPCClient: error on JSON-RPC call 00:27:28.861 2024/11/25 12:26:24 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.3 trsvcid:8010 trtype:tcp wait_for_attach:%!s(bool=false)], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:27:28.861 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:28.861 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:27:28.861 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:28.861 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:28.861 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:28.861 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:27:28.861 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq 
-r '.[].name' 00:27:28.861 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:28.861 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.861 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:27:28.861 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:28.861 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:27:28.861 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.861 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:27:28.861 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:27:28.861 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 88443 00:27:28.861 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:27:28.861 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:28.861 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:27:28.861 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:28.861 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:27:28.861 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:28.861 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:28.861 rmmod nvme_tcp 00:27:28.861 rmmod nvme_fabrics 00:27:28.861 rmmod nvme_keyring 00:27:28.861 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:28.861 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:27:28.861 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:27:28.861 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 88392 ']' 00:27:28.861 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 88392 00:27:28.861 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 88392 ']' 00:27:28.861 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 88392 00:27:28.861 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:27:28.861 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:28.861 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88392 00:27:28.861 killing process with pid 88392 00:27:28.861 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:28.861 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:28.861 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88392' 00:27:28.861 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 88392 00:27:28.861 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@978 -- # wait 88392 00:27:28.861 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:28.861 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:28.861 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:28.861 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:27:28.861 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:27:28.861 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:27:28.861 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:28.861 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:28.861 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:27:28.861 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:27:28.861 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:27:28.861 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:27:28.861 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:27:29.119 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:27:29.119 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:27:29.119 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:27:29.119 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:27:29.119 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:27:29.119 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:27:29.119 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:27:29.119 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:29.119 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:29.120 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:27:29.120 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:29.120 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:29.120 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:29.120 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:27:29.120 00:27:29.120 real 0m11.325s 00:27:29.120 user 0m21.482s 00:27:29.120 sys 0m1.265s 00:27:29.120 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:29.120 ************************************ 00:27:29.120 END TEST nvmf_host_discovery 00:27:29.120 ************************************ 
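For reference, the timeout exercised by this test can be reproduced by hand. A minimal sketch, condensed from the rpc_cmd invocation recorded above, assuming an SPDK host application is serving /tmp/host.sock and nothing listens on 10.0.0.3:8010, so the attach fails with Code=-110 exactly as logged:

  #!/usr/bin/env bash
  # Sketch of the discovery-timeout case above: start discovery against a
  # dead port with a 3000 ms attach timeout and expect the RPC to fail.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  if ! "$rpc" -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 \
        -q nqn.2021-12.io.spdk:test -T 3000; then
    echo "discovery attach timed out, as the test expects"
  fi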
00:27:29.120 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:29.120 12:26:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:27:29.120 12:26:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:29.120 12:26:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:29.120 12:26:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.380 ************************************ 00:27:29.380 START TEST nvmf_host_multipath_status 00:27:29.380 ************************************ 00:27:29.380 12:26:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:27:29.380 * Looking for test storage... 00:27:29.380 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:29.380 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:29.380 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:27:29.380 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:29.380 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:29.380 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:29.380 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:29.380 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:29.380 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:27:29.380 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:27:29.380 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:27:29.380 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:27:29.380 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:27:29.380 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:27:29.380 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:27:29.380 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:29.380 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:27:29.380 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:27:29.380 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:29.380 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:29.380 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:27:29.380 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:27:29.380 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:29.380 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:27:29.380 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:27:29.380 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:27:29.380 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:27:29.380 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:29.380 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:27:29.380 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:27:29.380 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:29.380 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:29.380 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:27:29.380 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:29.380 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:29.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:29.380 --rc genhtml_branch_coverage=1 00:27:29.380 --rc genhtml_function_coverage=1 00:27:29.380 --rc genhtml_legend=1 00:27:29.380 --rc geninfo_all_blocks=1 00:27:29.380 --rc geninfo_unexecuted_blocks=1 00:27:29.380 00:27:29.380 ' 00:27:29.380 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:29.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:29.380 --rc genhtml_branch_coverage=1 00:27:29.380 --rc genhtml_function_coverage=1 00:27:29.380 --rc genhtml_legend=1 00:27:29.380 --rc geninfo_all_blocks=1 00:27:29.380 --rc geninfo_unexecuted_blocks=1 00:27:29.380 00:27:29.380 ' 00:27:29.380 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:29.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:29.380 --rc genhtml_branch_coverage=1 00:27:29.380 --rc genhtml_function_coverage=1 00:27:29.380 --rc genhtml_legend=1 00:27:29.380 --rc geninfo_all_blocks=1 00:27:29.380 --rc geninfo_unexecuted_blocks=1 00:27:29.380 00:27:29.380 ' 00:27:29.380 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:29.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:29.380 --rc genhtml_branch_coverage=1 00:27:29.380 --rc genhtml_function_coverage=1 00:27:29.380 --rc genhtml_legend=1 00:27:29.380 --rc geninfo_all_blocks=1 00:27:29.380 --rc geninfo_unexecuted_blocks=1 00:27:29.380 00:27:29.380 ' 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:29.381 12:26:25 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:29.381 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:27:29.381 Cannot find device "nvmf_init_br" 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:27:29.381 Cannot find device "nvmf_init_br2" 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:27:29.381 Cannot find device "nvmf_tgt_br" 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:27:29.381 Cannot find device "nvmf_tgt_br2" 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:27:29.381 Cannot find device "nvmf_init_br" 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:27:29.381 Cannot find device "nvmf_init_br2" 00:27:29.381 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:27:29.382 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:27:29.382 Cannot find device "nvmf_tgt_br" 00:27:29.382 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:27:29.382 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:27:29.640 Cannot find device "nvmf_tgt_br2" 00:27:29.640 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:27:29.640 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:27:29.640 Cannot find device "nvmf_br" 00:27:29.640 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:27:29.640 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:27:29.640 Cannot find device "nvmf_init_if" 00:27:29.640 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:27:29.640 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:27:29.640 Cannot find device "nvmf_init_if2" 00:27:29.640 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:27:29.640 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:29.640 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:29.640 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:27:29.640 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:29.640 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:29.640 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:27:29.640 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:27:29.640 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:29.640 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:27:29.640 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:29.640 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:29.640 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:29.640 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:29.640 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:29.640 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:27:29.640 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:27:29.640 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:27:29.640 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:27:29.640 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:27:29.640 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:27:29.640 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:27:29.640 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:27:29.640 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:27:29.640 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:29.640 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:29.640 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:29.640 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:27:29.640 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:27:29.640 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:27:29.640 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:27:29.640 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:29.640 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:29.640 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:29.640 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:27:29.640 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:27:29.640 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:27:29.640 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:29.640 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:27:29.640 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:27:29.640 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:29.640 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:27:29.640 00:27:29.640 --- 10.0.0.3 ping statistics --- 00:27:29.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:29.640 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:27:29.640 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:27:29.640 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:27:29.640 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:27:29.640 00:27:29.640 --- 10.0.0.4 ping statistics --- 00:27:29.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:29.641 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:27:29.641 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:29.641 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:29.641 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:27:29.641 00:27:29.641 --- 10.0.0.1 ping statistics --- 00:27:29.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:29.641 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:27:29.641 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:27:29.641 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:29.641 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms 00:27:29.641 00:27:29.641 --- 10.0.0.2 ping statistics --- 00:27:29.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:29.641 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:27:29.641 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:29.641 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:27:29.641 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:29.641 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:29.641 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:29.641 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:29.641 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:29.641 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:29.641 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:29.899 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:27:29.899 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:29.899 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:29.899 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:29.899 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=88988 00:27:29.899 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:27:29.899 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 88988 00:27:29.899 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 88988 ']' 00:27:29.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:29.899 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:29.899 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:29.899 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
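Condensed from the nvmf_veth_init commands above: the harness wires veth pairs into a bridge and moves the target ends into a network namespace before starting nvmf_tgt inside it. A minimal sketch of the first initiator/target leg only (root required; the real setup adds a second pair per side, the iptables ACCEPT rules, and the 10.0.0.2/10.0.0.4 addresses):

  # One initiator leg (host side) and one target leg (namespaced).
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_tgt_br up
  # Bridge both legs together, then verify reachability as the log does.
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ping -c 1 10.0.0.3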
00:27:29.899 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:29.899 12:26:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:29.899 [2024-11-25 12:26:25.564789] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:27:29.899 [2024-11-25 12:26:25.564847] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:29.899 [2024-11-25 12:26:25.698946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:29.899 [2024-11-25 12:26:25.735707] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:29.899 [2024-11-25 12:26:25.735764] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:29.899 [2024-11-25 12:26:25.735771] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:29.899 [2024-11-25 12:26:25.735776] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:29.899 [2024-11-25 12:26:25.735781] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:29.899 [2024-11-25 12:26:25.736464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:29.899 [2024-11-25 12:26:25.736484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:30.831 12:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:30.831 12:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:27:30.831 12:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:30.831 12:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:30.831 12:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:30.831 12:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:30.831 12:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=88988 00:27:30.831 12:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:30.831 [2024-11-25 12:26:26.629191] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:30.831 12:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:31.089 Malloc0 00:27:31.089 12:26:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:27:31.347 12:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:31.347 12:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:31.606 [2024-11-25 12:26:27.401712] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:31.606 12:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:27:31.919 [2024-11-25 12:26:27.601849] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:27:31.919 12:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=89086 00:27:31.919 12:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:31.919 12:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 89086 /var/tmp/bdevperf.sock 00:27:31.919 12:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:27:31.919 12:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 89086 ']' 00:27:31.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:31.920 12:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:31.920 12:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:31.920 12:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
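The rest of the run repeats one cycle: attach the subsystem through both listeners as a single multipath bdev, flip each listener's ANA state, then poll bdev_nvme_get_io_paths and filter with jq to check the current/connected/accessible flags per port. A minimal sketch of one iteration, condensed from the RPCs below and assuming bdevperf is already up on /var/tmp/bdevperf.sock:

  #!/usr/bin/env bash
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  # Attach both paths of nqn.2016-06.io.spdk:cnode1 as multipath bdev Nvme0.
  for port in 4420 4421; do
    "$rpc" -s "$sock" bdev_nvme_attach_controller -b Nvme0 -t tcp \
        -a 10.0.0.3 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -x multipath -l -1 -o 10
  done
  # Demote the 4420 listener on the target side (default RPC socket),
  # then ask bdevperf which path is carrying I/O now.
  "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
  "$rpc" -s "$sock" bdev_nvme_get_io_paths | \
      jq -r '.poll_groups[].io_paths[] | select(.transport.trsvcid=="4421").current'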
00:27:31.920 12:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:31.920 12:26:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:32.853 12:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:32.853 12:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:27:32.853 12:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:27:33.110 12:26:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:33.369 Nvme0n1 00:27:33.369 12:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:33.626 Nvme0n1 00:27:33.626 12:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:27:33.626 12:26:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:27:36.154 12:26:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:27:36.154 12:26:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:27:36.154 12:26:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:27:36.154 12:26:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:27:37.088 12:26:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:27:37.088 12:26:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:37.088 12:26:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:37.088 12:26:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:37.346 12:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:37.346 12:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:37.346 12:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:37.346 12:26:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:37.604 12:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:37.604 12:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:37.604 12:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:37.604 12:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:37.863 12:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:37.863 12:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:37.863 12:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:37.863 12:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:37.863 12:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:37.863 12:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:37.863 12:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:37.863 12:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:38.121 12:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:38.121 12:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:38.121 12:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:38.121 12:26:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:38.379 12:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:38.379 12:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:27:38.380 12:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:27:38.638 12:26:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:27:38.897 12:26:34 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:27:39.830 12:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:27:39.830 12:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:39.830 12:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:39.830 12:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:40.089 12:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:40.089 12:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:40.089 12:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:40.089 12:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:40.347 12:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:40.347 12:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:40.347 12:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:40.347 12:26:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:40.347 12:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:40.347 12:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:40.347 12:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:40.347 12:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:40.606 12:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:40.606 12:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:40.606 12:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:40.606 12:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:40.864 12:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:40.864 12:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:40.864 12:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:40.864 12:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:41.122 12:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:41.122 12:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:27:41.122 12:26:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:27:41.379 12:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:27:41.379 12:26:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:27:42.754 12:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:27:42.754 12:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:42.754 12:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:42.754 12:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:42.754 12:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:42.754 12:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:42.754 12:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:42.754 12:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:43.011 12:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:43.011 12:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:43.011 12:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:43.011 12:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:43.011 12:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:43.011 12:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:27:43.011 12:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:43.011 12:26:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:43.269 12:26:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:43.269 12:26:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:43.269 12:26:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:43.269 12:26:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:43.527 12:26:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:43.527 12:26:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:43.527 12:26:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:43.527 12:26:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:43.785 12:26:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:43.785 12:26:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:27:43.785 12:26:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:27:44.124 12:26:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:27:44.124 12:26:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:27:45.057 12:26:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:27:45.057 12:26:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:45.057 12:26:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:45.057 12:26:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:45.316 12:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:45.316 12:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:45.316 12:26:41 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:45.316 12:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:45.575 12:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:45.575 12:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:45.575 12:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:45.575 12:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:45.832 12:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:45.832 12:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:45.832 12:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:45.832 12:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:46.090 12:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:46.090 12:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:46.090 12:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:46.090 12:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:46.090 12:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:46.090 12:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:46.090 12:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:46.090 12:26:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:46.348 12:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:46.348 12:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:27:46.348 12:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:27:46.606 12:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:27:46.864 12:26:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:27:47.797 12:26:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:27:47.797 12:26:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:47.798 12:26:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:47.798 12:26:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:48.055 12:26:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:48.055 12:26:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:48.055 12:26:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:48.055 12:26:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:48.055 12:26:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:48.055 12:26:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:48.055 12:26:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:48.055 12:26:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:48.313 12:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:48.313 12:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:48.313 12:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:48.313 12:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:48.572 12:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:48.572 12:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:48.572 12:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:48.572 12:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:27:48.831 12:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:48.831 12:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:48.831 12:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:48.831 12:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:48.831 12:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:48.831 12:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:27:48.831 12:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:27:49.089 12:26:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:27:49.346 12:26:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:27:50.280 12:26:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:27:50.280 12:26:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:50.280 12:26:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:50.280 12:26:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:50.538 12:26:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:50.538 12:26:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:50.538 12:26:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:50.538 12:26:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:50.796 12:26:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:50.796 12:26:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:50.796 12:26:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:50.796 12:26:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
00:27:51.054 12:26:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:51.054 12:26:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:51.054 12:26:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:51.054 12:26:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:51.054 12:26:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:51.055 12:26:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:51.055 12:26:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:51.055 12:26:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:51.313 12:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:51.313 12:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:51.313 12:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:51.313 12:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:51.571 12:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:51.571 12:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:27:51.828 12:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:27:51.828 12:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:27:52.085 12:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:27:52.085 12:26:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:27:53.458 12:26:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:27:53.458 12:26:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:53.458 12:26:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
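The policy switch at multipath_status.sh@116 is the pivot of the second half of the test: under the default active_passive policy only one path at a time reports current == true, but after bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active every optimized path carries I/O simultaneously, which is why the check that follows expects current true on both 4420 and 4421. Each ANA flip is two target-side listener RPCs, with the NQN and addresses as in this run (a sketch only, assuming the target answers on rpc.py's default socket):

# set_ANA_state STATE_4420 STATE_4421 -- as replayed in the trace above
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
set_ANA_state() {
    "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420 -n "$1"
    "$rpc" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4421 -n "$2"
}
set_ANA_state optimized optimized
sleep 1   # give the host a beat to process the ANA change notification

The sleep 1 between each set_ANA_state and check_status pair in the log exists for exactly that reason: the host only updates its io_path view after it handles the asynchronous ANA event.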
00:27:53.458 12:26:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:53.458 12:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:53.458 12:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:53.458 12:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:53.458 12:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:53.715 12:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:53.716 12:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:53.716 12:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:53.716 12:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:53.716 12:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:53.716 12:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:53.716 12:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:53.716 12:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:53.973 12:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:53.973 12:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:53.973 12:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:53.973 12:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:54.231 12:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:54.231 12:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:54.231 12:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:54.231 12:26:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:54.489 12:26:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:54.489 
12:26:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:27:54.489 12:26:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:27:54.489 12:26:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:27:54.747 12:26:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:27:56.120 12:26:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:27:56.120 12:26:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:56.120 12:26:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:56.120 12:26:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:56.120 12:26:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:56.120 12:26:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:56.120 12:26:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:56.120 12:26:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:56.120 12:26:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:56.121 12:26:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:56.121 12:26:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:56.121 12:26:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:56.385 12:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:56.385 12:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:56.385 12:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:56.385 12:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:56.650 12:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:56.650 12:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:56.650 12:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:56.650 12:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:56.908 12:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:56.908 12:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:56.908 12:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:56.908 12:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:57.165 12:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:57.165 12:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:27:57.165 12:26:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:27:57.165 12:26:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:27:57.422 12:26:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:27:58.799 12:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:27:58.799 12:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:58.799 12:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:58.799 12:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:58.799 12:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:58.799 12:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:58.799 12:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:58.799 12:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:58.799 12:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:58.799 12:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:27:58.799 12:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:58.799 12:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:59.056 12:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:59.056 12:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:59.056 12:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:59.056 12:26:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:59.313 12:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:59.313 12:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:59.313 12:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:59.313 12:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:59.570 12:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:59.570 12:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:59.570 12:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:59.570 12:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:59.828 12:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:59.828 12:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:27:59.828 12:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:28:00.086 12:26:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:28:00.343 12:26:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:28:01.277 12:26:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:28:01.277 12:26:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:28:01.277 12:26:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:28:01.277 12:26:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:01.535 12:26:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:01.535 12:26:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:28:01.535 12:26:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:01.535 12:26:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:28:01.535 12:26:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:01.535 12:26:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:28:01.793 12:26:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:01.793 12:26:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:28:01.793 12:26:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:01.793 12:26:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:28:01.793 12:26:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:28:01.793 12:26:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:02.051 12:26:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:02.052 12:26:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:28:02.052 12:26:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:02.052 12:26:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:28:02.309 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:28:02.309 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:28:02.309 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:28:02.309 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:28:02.587 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:28:02.587 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 89086 00:28:02.587 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 89086 ']' 00:28:02.587 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 89086 00:28:02.587 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:28:02.587 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:02.587 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89086 00:28:02.587 killing process with pid 89086 00:28:02.587 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:28:02.587 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:28:02.587 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89086' 00:28:02.587 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 89086 00:28:02.587 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 89086 00:28:02.587 { 00:28:02.587 "results": [ 00:28:02.587 { 00:28:02.587 "job": "Nvme0n1", 00:28:02.587 "core_mask": "0x4", 00:28:02.587 "workload": "verify", 00:28:02.587 "status": "terminated", 00:28:02.587 "verify_range": { 00:28:02.587 "start": 0, 00:28:02.587 "length": 16384 00:28:02.587 }, 00:28:02.587 "queue_depth": 128, 00:28:02.587 "io_size": 4096, 00:28:02.587 "runtime": 28.743774, 00:28:02.587 "iops": 12412.496702764223, 00:28:02.587 "mibps": 48.48631524517275, 00:28:02.587 "io_failed": 0, 00:28:02.587 "io_timeout": 0, 00:28:02.587 "avg_latency_us": 10290.72604273327, 00:28:02.587 "min_latency_us": 354.46153846153845, 00:28:02.587 "max_latency_us": 3019898.88 00:28:02.587 } 00:28:02.587 ], 00:28:02.587 "core_count": 1 00:28:02.587 } 00:28:02.587 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 89086 00:28:02.587 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:28:02.587 [2024-11-25 12:26:27.660145] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:28:02.587 [2024-11-25 12:26:27.660220] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89086 ] 00:28:02.587 [2024-11-25 12:26:27.807859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:02.587 [2024-11-25 12:26:27.856350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:02.587 Running I/O for 90 seconds... 
00:28:02.587 12036.00 IOPS, 47.02 MiB/s [2024-11-25T12:26:58.463Z] 12188.00 IOPS, 47.61 MiB/s [2024-11-25T12:26:58.463Z] 12308.00 IOPS, 48.08 MiB/s [2024-11-25T12:26:58.463Z] 12321.00 IOPS, 48.13 MiB/s [2024-11-25T12:26:58.463Z] 12374.20 IOPS, 48.34 MiB/s [2024-11-25T12:26:58.463Z] 12486.83 IOPS, 48.78 MiB/s [2024-11-25T12:26:58.463Z] 12584.14 IOPS, 49.16 MiB/s [2024-11-25T12:26:58.463Z] 12616.00 IOPS, 49.28 MiB/s [2024-11-25T12:26:58.463Z] 12626.22 IOPS, 49.32 MiB/s [2024-11-25T12:26:58.463Z] 12628.70 IOPS, 49.33 MiB/s [2024-11-25T12:26:58.463Z] 12614.36 IOPS, 49.27 MiB/s [2024-11-25T12:26:58.463Z] 12627.92 IOPS, 49.33 MiB/s [2024-11-25T12:26:58.463Z] [2024-11-25 12:26:42.323574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:115928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.587 [2024-11-25 12:26:42.323623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:02.587 [2024-11-25 12:26:42.323658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:116160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.587 [2024-11-25 12:26:42.323667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:02.587 [2024-11-25 12:26:42.323680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:116168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.587 [2024-11-25 12:26:42.323687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:02.587 [2024-11-25 12:26:42.323700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:116176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.587 [2024-11-25 12:26:42.323707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:02.587 [2024-11-25 12:26:42.323720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:116184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.587 [2024-11-25 12:26:42.323727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:02.587 [2024-11-25 12:26:42.323739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:116192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.587 [2024-11-25 12:26:42.323747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:02.587 [2024-11-25 12:26:42.323769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:116200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.587 [2024-11-25 12:26:42.323776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:02.587 [2024-11-25 12:26:42.323788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:116208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.587 [2024-11-25 12:26:42.323795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:02.587 
[2024-11-25 12:26:42.323808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:116216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.587 [2024-11-25 12:26:42.323815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:02.587 [2024-11-25 12:26:42.323828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:116224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.587 [2024-11-25 12:26:42.323852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:02.587 [2024-11-25 12:26:42.323865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:116232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.587 [2024-11-25 12:26:42.323873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:02.587 [2024-11-25 12:26:42.323885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:116240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.587 [2024-11-25 12:26:42.323893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:02.587 [2024-11-25 12:26:42.323905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:116248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.587 [2024-11-25 12:26:42.323912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:02.587 [2024-11-25 12:26:42.323924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:116256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.587 [2024-11-25 12:26:42.323931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:02.587 [2024-11-25 12:26:42.323944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:116264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.587 [2024-11-25 12:26:42.323950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:02.587 [2024-11-25 12:26:42.323963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:116272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.588 [2024-11-25 12:26:42.323970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:02.588 [2024-11-25 12:26:42.323982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:116280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.588 [2024-11-25 12:26:42.323990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:02.588 [2024-11-25 12:26:42.324003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:116288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.588 [2024-11-25 12:26:42.324010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:27 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:02.588 [2024-11-25 12:26:42.324023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:116296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.588 [2024-11-25 12:26:42.324030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:02.588 [2024-11-25 12:26:42.324043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:116304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.588 [2024-11-25 12:26:42.324050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:02.588 [2024-11-25 12:26:42.324063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:116312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.588 [2024-11-25 12:26:42.324070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:02.588 [2024-11-25 12:26:42.324082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:116320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.588 [2024-11-25 12:26:42.324095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:02.588 [2024-11-25 12:26:42.324108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:116328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.588 [2024-11-25 12:26:42.324116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:02.588 [2024-11-25 12:26:42.324129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:116336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.588 [2024-11-25 12:26:42.324137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:02.588 [2024-11-25 12:26:42.324150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:116344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.588 [2024-11-25 12:26:42.324158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:02.588 [2024-11-25 12:26:42.324171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:116352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.588 [2024-11-25 12:26:42.324178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:02.588 [2024-11-25 12:26:42.324192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:116360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.588 [2024-11-25 12:26:42.324199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:02.588 [2024-11-25 12:26:42.324212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:116368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.588 [2024-11-25 12:26:42.324220] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:02.588 [2024-11-25 12:26:42.324232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:116376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.588 [2024-11-25 12:26:42.324240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:02.588 [2024-11-25 12:26:42.324253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:116384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.588 [2024-11-25 12:26:42.324260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:02.588 [2024-11-25 12:26:42.324273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:116392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.588 [2024-11-25 12:26:42.324281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:02.588 [2024-11-25 12:26:42.324293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:116400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.588 [2024-11-25 12:26:42.324300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:02.588 [2024-11-25 12:26:42.324313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:116408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.588 [2024-11-25 12:26:42.324321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.588 [2024-11-25 12:26:42.324335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:116416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.588 [2024-11-25 12:26:42.324343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:02.588 [2024-11-25 12:26:42.324470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:116424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.588 [2024-11-25 12:26:42.324482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:02.588 [2024-11-25 12:26:42.324498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:116432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.588 [2024-11-25 12:26:42.324505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:02.588 [2024-11-25 12:26:42.324521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:116440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.588 [2024-11-25 12:26:42.324529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:02.588 [2024-11-25 12:26:42.324543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:116448 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:28:02.588 [2024-11-25 12:26:42.324551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:02.588 [2024-11-25 12:26:42.324565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:116456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.588 [2024-11-25 12:26:42.324573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:02.588 [2024-11-25 12:26:42.324587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:116464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.588 [2024-11-25 12:26:42.324594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:02.588 [2024-11-25 12:26:42.324609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:116472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.588 [2024-11-25 12:26:42.324616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:02.588 [2024-11-25 12:26:42.324631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:116480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.588 [2024-11-25 12:26:42.324638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:02.588 [2024-11-25 12:26:42.324653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:116488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.588 [2024-11-25 12:26:42.324660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:02.588 [2024-11-25 12:26:42.324674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:116496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.588 [2024-11-25 12:26:42.324681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:02.588 [2024-11-25 12:26:42.324696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:116504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.588 [2024-11-25 12:26:42.324703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:02.588 [2024-11-25 12:26:42.324717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:116512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.588 [2024-11-25 12:26:42.324725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:02.588 [2024-11-25 12:26:42.324744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:116520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.588 [2024-11-25 12:26:42.324761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:02.588 [2024-11-25 12:26:42.324776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:60 nsid:1 lba:116528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.588 [2024-11-25 12:26:42.324783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:02.588 [2024-11-25 12:26:42.324798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:116536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.588 [2024-11-25 12:26:42.324806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:02.588 [2024-11-25 12:26:42.324820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:116544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.588 [2024-11-25 12:26:42.324828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:02.588 [2024-11-25 12:26:42.324842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:116552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.588 [2024-11-25 12:26:42.324850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:02.588 [2024-11-25 12:26:42.324864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:116560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.588 [2024-11-25 12:26:42.324872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:02.588 [2024-11-25 12:26:42.324886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:116568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.588 [2024-11-25 12:26:42.324894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:02.589 [2024-11-25 12:26:42.324909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:115936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.589 [2024-11-25 12:26:42.324916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:02.589 [2024-11-25 12:26:42.324930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:115944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.589 [2024-11-25 12:26:42.324938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:02.589 [2024-11-25 12:26:42.324953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:115952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.589 [2024-11-25 12:26:42.324960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:02.589 [2024-11-25 12:26:42.324974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:115960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.589 [2024-11-25 12:26:42.324982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:02.589 [2024-11-25 12:26:42.324997] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:115968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.589 [2024-11-25 12:26:42.325004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:02.589 [2024-11-25 12:26:42.325019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:115976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.589 [2024-11-25 12:26:42.325030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:02.589 [2024-11-25 12:26:42.325045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:115984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.589 [2024-11-25 12:26:42.325052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:02.589 [2024-11-25 12:26:42.325067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:115992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.589 [2024-11-25 12:26:42.325074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:02.589 [2024-11-25 12:26:42.325089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:116000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.589 [2024-11-25 12:26:42.325096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:02.589 [2024-11-25 12:26:42.325111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:116008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.589 [2024-11-25 12:26:42.325119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.589 [2024-11-25 12:26:42.325134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:116016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.589 [2024-11-25 12:26:42.325141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.589 [2024-11-25 12:26:42.325157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:116024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.589 [2024-11-25 12:26:42.325165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.589 [2024-11-25 12:26:42.326399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:116032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.589 [2024-11-25 12:26:42.326425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:02.589 [2024-11-25 12:26:42.326446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:116040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.589 [2024-11-25 12:26:42.326453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 
cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:02.589 [2024-11-25 12:26:42.326472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:116048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.589 [2024-11-25 12:26:42.326479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:02.589 [2024-11-25 12:26:42.326498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:116056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.589 [2024-11-25 12:26:42.326505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:02.589 [2024-11-25 12:26:42.326523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:116064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.589 [2024-11-25 12:26:42.326531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:02.589 [2024-11-25 12:26:42.326549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:116072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.589 [2024-11-25 12:26:42.326568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:02.589 [2024-11-25 12:26:42.326587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:116080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.589 [2024-11-25 12:26:42.326594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:02.589 [2024-11-25 12:26:42.326613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:116088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.589 [2024-11-25 12:26:42.326621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:02.589 [2024-11-25 12:26:42.326640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:116096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.589 [2024-11-25 12:26:42.326647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:02.589 [2024-11-25 12:26:42.326666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:116104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.589 [2024-11-25 12:26:42.326673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:02.589 [2024-11-25 12:26:42.326692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:116112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.589 [2024-11-25 12:26:42.326700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:02.589 [2024-11-25 12:26:42.326719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:116120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.589 [2024-11-25 12:26:42.326726] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:02.589 [2024-11-25 12:26:42.326744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:116128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.589 [2024-11-25 12:26:42.326767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:02.589 [2024-11-25 12:26:42.326786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:116136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.589 [2024-11-25 12:26:42.326794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:02.589 [2024-11-25 12:26:42.326812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:116144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.589 [2024-11-25 12:26:42.326820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:02.589 [2024-11-25 12:26:42.326839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:116152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.589 [2024-11-25 12:26:42.326847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:28:02.589 12457.46 IOPS, 48.66 MiB/s [2024-11-25T12:26:58.465Z]
11567.64 IOPS, 45.19 MiB/s [2024-11-25T12:26:58.465Z]
10796.47 IOPS, 42.17 MiB/s [2024-11-25T12:26:58.465Z]
10267.88 IOPS, 40.11 MiB/s [2024-11-25T12:26:58.465Z]
10427.35 IOPS, 40.73 MiB/s [2024-11-25T12:26:58.465Z]
10574.28 IOPS, 41.31 MiB/s [2024-11-25T12:26:58.465Z]
10892.32 IOPS, 42.55 MiB/s [2024-11-25T12:26:58.465Z]
11238.15 IOPS, 43.90 MiB/s [2024-11-25T12:26:58.465Z]
11512.62 IOPS, 44.97 MiB/s [2024-11-25T12:26:58.465Z]
11590.95 IOPS, 45.28 MiB/s [2024-11-25T12:26:58.465Z]
11667.39 IOPS, 45.58 MiB/s [2024-11-25T12:26:58.465Z]
11782.04 IOPS, 46.02 MiB/s [2024-11-25T12:26:58.465Z]
12034.68 IOPS, 47.01 MiB/s [2024-11-25T12:26:58.465Z]
12262.88 IOPS, 47.90 MiB/s [2024-11-25T12:26:58.465Z]
[2024-11-25 12:26:56.000760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:1600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.589 [2024-11-25 12:26:56.000805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:02.589 [2024-11-25 12:26:56.000823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:1616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.589 [2024-11-25 12:26:56.000831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:02.589 [2024-11-25 12:26:56.000843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.589 [2024-11-25 12:26:56.000850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:02.589 [2024-11-25 12:26:56.000862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1648 len:8 SGL
DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.589 [2024-11-25 12:26:56.000869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:02.589 [2024-11-25 12:26:56.000881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:1664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.589 [2024-11-25 12:26:56.000888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:02.589 [2024-11-25 12:26:56.000900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.589 [2024-11-25 12:26:56.000906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:02.589 [2024-11-25 12:26:56.000918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.590 [2024-11-25 12:26:56.000925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:02.590 [2024-11-25 12:26:56.000937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:1712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.590 [2024-11-25 12:26:56.000943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:02.590 [2024-11-25 12:26:56.000955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.590 [2024-11-25 12:26:56.000962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:02.590 [2024-11-25 12:26:56.000974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:1024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.590 [2024-11-25 12:26:56.000981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:02.590 [2024-11-25 12:26:56.000993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.590 [2024-11-25 12:26:56.000999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:02.590 [2024-11-25 12:26:56.001011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:1088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.590 [2024-11-25 12:26:56.001019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:02.590 [2024-11-25 12:26:56.001030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.590 [2024-11-25 12:26:56.001056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:02.590 [2024-11-25 12:26:56.001068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:77 nsid:1 lba:1744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.590 [2024-11-25 12:26:56.001075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:02.590 [2024-11-25 12:26:56.001087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:1760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.590 [2024-11-25 12:26:56.001094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:02.590 [2024-11-25 12:26:56.001106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.590 [2024-11-25 12:26:56.001113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:02.590 [2024-11-25 12:26:56.001125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:1792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.590 [2024-11-25 12:26:56.001131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:02.590 [2024-11-25 12:26:56.001144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.590 [2024-11-25 12:26:56.001150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:02.590 [2024-11-25 12:26:56.001162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:1824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.590 [2024-11-25 12:26:56.001169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:02.590 [2024-11-25 12:26:56.001181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.590 [2024-11-25 12:26:56.001188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:02.590 [2024-11-25 12:26:56.001200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.590 [2024-11-25 12:26:56.001206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:02.590 [2024-11-25 12:26:56.001218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:1136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.590 [2024-11-25 12:26:56.001224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.590 [2024-11-25 12:26:56.001236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.590 [2024-11-25 12:26:56.001243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:02.590 [2024-11-25 12:26:56.001256] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:1200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.590 [2024-11-25 12:26:56.001263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:02.590 [2024-11-25 12:26:56.001275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.590 [2024-11-25 12:26:56.001282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:02.590 [2024-11-25 12:26:56.001299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:1880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.590 [2024-11-25 12:26:56.001306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:02.590 [2024-11-25 12:26:56.001318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:1216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.590 [2024-11-25 12:26:56.001325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:02.590 [2024-11-25 12:26:56.001337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:1248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.590 [2024-11-25 12:26:56.001344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:02.590 [2024-11-25 12:26:56.001356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:1280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.590 [2024-11-25 12:26:56.001362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:02.590 [2024-11-25 12:26:56.001375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.590 [2024-11-25 12:26:56.001382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:02.590 [2024-11-25 12:26:56.001393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:1344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.590 [2024-11-25 12:26:56.001401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:02.590 [2024-11-25 12:26:56.001413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:1376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.590 [2024-11-25 12:26:56.001420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:02.590 [2024-11-25 12:26:56.001432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.590 [2024-11-25 12:26:56.001439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006d p:0 m:0 dnr:0 
00:28:02.590 [2024-11-25 12:26:56.001451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:1440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.590 [2024-11-25 12:26:56.001459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:02.590 [2024-11-25 12:26:56.001471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.590 [2024-11-25 12:26:56.001478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:02.590 [2024-11-25 12:26:56.001491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:1504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.590 [2024-11-25 12:26:56.001497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:02.590 [2024-11-25 12:26:56.001509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:1544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.590 [2024-11-25 12:26:56.001516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:02.590 [2024-11-25 12:26:56.001533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.590 [2024-11-25 12:26:56.001540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:02.590 [2024-11-25 12:26:56.002124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.590 [2024-11-25 12:26:56.002140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:02.590 [2024-11-25 12:26:56.002154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.590 [2024-11-25 12:26:56.002161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:02.590 [2024-11-25 12:26:56.002173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:1936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.590 [2024-11-25 12:26:56.002180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:02.590 [2024-11-25 12:26:56.002192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.590 [2024-11-25 12:26:56.002212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:02.590 [2024-11-25 12:26:56.002225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:1968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.590 [2024-11-25 12:26:56.002231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:61 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:02.590 [2024-11-25 12:26:56.002243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:1984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.590 [2024-11-25 12:26:56.002250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:02.590 [2024-11-25 12:26:56.002262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:2000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.590 [2024-11-25 12:26:56.002269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:02.590 [2024-11-25 12:26:56.002281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:1224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.591 [2024-11-25 12:26:56.002288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:02.591 [2024-11-25 12:26:56.002300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:1256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.591 [2024-11-25 12:26:56.002307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:02.591 [2024-11-25 12:26:56.002319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:1288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.591 [2024-11-25 12:26:56.002326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:02.591 [2024-11-25 12:26:56.002338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:1320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.591 [2024-11-25 12:26:56.002345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:02.591 [2024-11-25 12:26:56.002357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.591 [2024-11-25 12:26:56.002371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:02.591 [2024-11-25 12:26:56.002383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:1384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.591 [2024-11-25 12:26:56.002390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:02.591 [2024-11-25 12:26:56.002402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.591 [2024-11-25 12:26:56.002409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.591 [2024-11-25 12:26:56.002421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:1448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.591 [2024-11-25 12:26:56.002428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.591 [2024-11-25 12:26:56.002440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.591 [2024-11-25 12:26:56.002447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.591 [2024-11-25 12:26:56.002459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:1512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.591 [2024-11-25 12:26:56.002466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:02.591 [2024-11-25 12:26:56.002477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:1536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.591 [2024-11-25 12:26:56.002484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:02.591 [2024-11-25 12:26:56.002497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:1568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.591 [2024-11-25 12:26:56.002506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:02.591 [2024-11-25 12:26:56.002971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:2016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.591 [2024-11-25 12:26:56.002985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:02.591 [2024-11-25 12:26:56.002998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.591 [2024-11-25 12:26:56.003005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:02.591 [2024-11-25 12:26:56.003017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.591 [2024-11-25 12:26:56.003024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:02.591 [2024-11-25 12:26:56.003036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.591 [2024-11-25 12:26:56.003043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:02.591 [2024-11-25 12:26:56.003055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.591 [2024-11-25 12:26:56.003068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:02.591 [2024-11-25 12:26:56.003080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.591 [2024-11-25 12:26:56.003087] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:02.591 [2024-11-25 12:26:56.003099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:2112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.591 [2024-11-25 12:26:56.003106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:02.591 [2024-11-25 12:26:56.003118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:1608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.591 [2024-11-25 12:26:56.003124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:02.591 [2024-11-25 12:26:56.003137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:1640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.591 [2024-11-25 12:26:56.003144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:02.591 [2024-11-25 12:26:56.003156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:1672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.591 [2024-11-25 12:26:56.003163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:02.591 [2024-11-25 12:26:56.003174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.591 [2024-11-25 12:26:56.003182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:02.591 [2024-11-25 12:26:56.003194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.591 [2024-11-25 12:26:56.003200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:02.591 [2024-11-25 12:26:56.003212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:1648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.591 [2024-11-25 12:26:56.003219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:02.591 [2024-11-25 12:26:56.003231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.591 [2024-11-25 12:26:56.003237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:02.591 [2024-11-25 12:26:56.003249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:1712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.591 [2024-11-25 12:26:56.003256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:02.591 [2024-11-25 12:26:56.003268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:1024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:02.591 [2024-11-25 12:26:56.003276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:02.591 [2024-11-25 12:26:56.003288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.591 [2024-11-25 12:26:56.003295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:02.591 [2024-11-25 12:26:56.003310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:1744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.591 [2024-11-25 12:26:56.003317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:02.591 [2024-11-25 12:26:56.003329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.591 [2024-11-25 12:26:56.003336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:02.591 [2024-11-25 12:26:56.003347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:1808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.591 [2024-11-25 12:26:56.003354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:02.591 [2024-11-25 12:26:56.003366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:1840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.592 [2024-11-25 12:26:56.003373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:02.592 [2024-11-25 12:26:56.003384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:1136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.592 [2024-11-25 12:26:56.003391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:02.592 [2024-11-25 12:26:56.003404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:1200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.592 [2024-11-25 12:26:56.003410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:02.592 [2024-11-25 12:26:56.003422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:1880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.592 [2024-11-25 12:26:56.003429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:02.592 [2024-11-25 12:26:56.003442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.592 [2024-11-25 12:26:56.003449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:02.592 [2024-11-25 12:26:56.003461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 
lba:1312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.592 [2024-11-25 12:26:56.003468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:02.592 [2024-11-25 12:26:56.003480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.592 [2024-11-25 12:26:56.003487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:02.592 [2024-11-25 12:26:56.003499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:1440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.592 [2024-11-25 12:26:56.003505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:02.592 [2024-11-25 12:26:56.003517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.592 [2024-11-25 12:26:56.003524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:02.592 [2024-11-25 12:26:56.003540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:1576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.592 [2024-11-25 12:26:56.003546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:02.592 [2024-11-25 12:26:56.003887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:1736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.592 [2024-11-25 12:26:56.003899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:02.592 [2024-11-25 12:26:56.003912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.592 [2024-11-25 12:26:56.003919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:02.592 [2024-11-25 12:26:56.003931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:1800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.592 [2024-11-25 12:26:56.003938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:02.592 [2024-11-25 12:26:56.003950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.592 [2024-11-25 12:26:56.003957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:02.592 [2024-11-25 12:26:56.003969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:1856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.592 [2024-11-25 12:26:56.003976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:02.592 [2024-11-25 12:26:56.003988] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.592 [2024-11-25 12:26:56.003995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:02.592 [2024-11-25 12:26:56.004008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:1920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.592 [2024-11-25 12:26:56.004015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:02.592 [2024-11-25 12:26:56.004028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:1952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.592 [2024-11-25 12:26:56.004034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:02.592 [2024-11-25 12:26:56.004047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.592 [2024-11-25 12:26:56.004054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:02.592 [2024-11-25 12:26:56.004066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.592 [2024-11-25 12:26:56.004073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:02.592 [2024-11-25 12:26:56.004086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:1288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.592 [2024-11-25 12:26:56.004092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:02.592 [2024-11-25 12:26:56.004105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.592 [2024-11-25 12:26:56.004118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:02.592 [2024-11-25 12:26:56.004130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:1416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.592 [2024-11-25 12:26:56.004137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:02.592 [2024-11-25 12:26:56.004149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.592 [2024-11-25 12:26:56.004156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:02.592 [2024-11-25 12:26:56.004168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:1536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.592 [2024-11-25 12:26:56.004176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 
00:28:02.592 [2024-11-25 12:26:56.004506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.592 [2024-11-25 12:26:56.004518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:02.592 [2024-11-25 12:26:56.004531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:2136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.592 [2024-11-25 12:26:56.004538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:02.592 [2024-11-25 12:26:56.004550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:1896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.592 [2024-11-25 12:26:56.004556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:02.592 [2024-11-25 12:26:56.004569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:1928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.592 [2024-11-25 12:26:56.004575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:02.592 [2024-11-25 12:26:56.004588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:1960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.592 [2024-11-25 12:26:56.004595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:02.592 [2024-11-25 12:26:56.004607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:1992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.592 [2024-11-25 12:26:56.004613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:02.592 [2024-11-25 12:26:56.004625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.592 [2024-11-25 12:26:56.004632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:02.592 [2024-11-25 12:26:56.004644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.592 [2024-11-25 12:26:56.004651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:02.592 [2024-11-25 12:26:56.004663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.592 [2024-11-25 12:26:56.004676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:02.592 [2024-11-25 12:26:56.004688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.592 [2024-11-25 12:26:56.004695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:105 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:02.592 [2024-11-25 12:26:56.004707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:1672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.592 [2024-11-25 12:26:56.004714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:02.592 [2024-11-25 12:26:56.004726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.592 [2024-11-25 12:26:56.004733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:02.592 [2024-11-25 12:26:56.004745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:1680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.592 [2024-11-25 12:26:56.004762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:02.592 [2024-11-25 12:26:56.004774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:1024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.592 [2024-11-25 12:26:56.004780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:02.593 [2024-11-25 12:26:56.004793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:1744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.593 [2024-11-25 12:26:56.004799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:02.593 [2024-11-25 12:26:56.004811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:1808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.593 [2024-11-25 12:26:56.004819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:02.593 [2024-11-25 12:26:56.004831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.593 [2024-11-25 12:26:56.004838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:02.593 [2024-11-25 12:26:56.004850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.593 [2024-11-25 12:26:56.004856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:02.593 [2024-11-25 12:26:56.004868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:1312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.593 [2024-11-25 12:26:56.004875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:02.593 [2024-11-25 12:26:56.004887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.593 [2024-11-25 12:26:56.004894] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:02.593 [2024-11-25 12:26:56.004912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:1576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.593 [2024-11-25 12:26:56.004919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:02.593 [2024-11-25 12:26:56.005246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:1768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.593 [2024-11-25 12:26:56.005259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:02.593 [2024-11-25 12:26:56.005272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.593 [2024-11-25 12:26:56.005280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:02.593 [2024-11-25 12:26:56.005292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:1888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.593 [2024-11-25 12:26:56.005299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:02.593 [2024-11-25 12:26:56.005311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:1952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.593 [2024-11-25 12:26:56.005318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:02.593 [2024-11-25 12:26:56.005330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:1224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.593 [2024-11-25 12:26:56.005336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:02.593 [2024-11-25 12:26:56.005348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.593 [2024-11-25 12:26:56.005356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:02.593 [2024-11-25 12:26:56.005368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:1480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.593 [2024-11-25 12:26:56.005375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:02.593 [2024-11-25 12:26:56.006626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:2024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.593 [2024-11-25 12:26:56.006642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:02.593 [2024-11-25 12:26:56.006657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:2056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:02.593 [2024-11-25 12:26:56.006664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:02.593 [2024-11-25 12:26:56.006676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.593 [2024-11-25 12:26:56.006683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:02.593 [2024-11-25 12:26:56.006695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:1600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.593 [2024-11-25 12:26:56.006702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:02.593 [2024-11-25 12:26:56.006715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.593 [2024-11-25 12:26:56.006721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:02.593 [2024-11-25 12:26:56.006740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:2136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.593 [2024-11-25 12:26:56.006747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:02.593 [2024-11-25 12:26:56.006770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.593 [2024-11-25 12:26:56.006777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:02.593 [2024-11-25 12:26:56.006789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:1992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.593 [2024-11-25 12:26:56.006796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:02.593 [2024-11-25 12:26:56.006811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:2064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.593 [2024-11-25 12:26:56.006818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:02.593 [2024-11-25 12:26:56.006830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.593 [2024-11-25 12:26:56.006837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:02.593 [2024-11-25 12:26:56.006853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:1616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.593 [2024-11-25 12:26:56.006860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:02.593 [2024-11-25 12:26:56.006873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:1024 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.593 [2024-11-25 12:26:56.006880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:02.593 [2024-11-25 12:26:56.006892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.593 [2024-11-25 12:26:56.006899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:02.593 [2024-11-25 12:26:56.006911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:1880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.593 [2024-11-25 12:26:56.006918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:02.593 [2024-11-25 12:26:56.006930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.593 [2024-11-25 12:26:56.006937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:02.593 [2024-11-25 12:26:56.006949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:1728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.593 [2024-11-25 12:26:56.006956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:02.593 [2024-11-25 12:26:56.006968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.593 [2024-11-25 12:26:56.006976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:02.593 [2024-11-25 12:26:56.006988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:1864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.593 [2024-11-25 12:26:56.006999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:02.593 [2024-11-25 12:26:56.007011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:1832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.593 [2024-11-25 12:26:56.007018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:02.593 [2024-11-25 12:26:56.007030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.593 [2024-11-25 12:26:56.007037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.593 [2024-11-25 12:26:56.007050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:1352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.593 [2024-11-25 12:26:56.007056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:02.593 [2024-11-25 12:26:56.007412] nvme_qpair.c: 
[... remaining nvme_qpair.c command/completion NOTICE pairs omitted (00:28:02.593-00:28:02.598, wall clock 12:26:56.007-12:26:56.018): READ and WRITE commands on sqid:1, nsid:1 (lba 1024-3136, len:8) continue to complete with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd advancing through 0064-007f and wrapping back to 0000; the stream continues ...]
12:26:56.018008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:02.598 [2024-11-25 12:26:56.018020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:2504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.598 [2024-11-25 12:26:56.018027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:02.598 [2024-11-25 12:26:56.018039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:2568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.599 [2024-11-25 12:26:56.018045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:02.599 [2024-11-25 12:26:56.018057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:2632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.599 [2024-11-25 12:26:56.018064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:02.599 [2024-11-25 12:26:56.018076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:2696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.599 [2024-11-25 12:26:56.018083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:02.599 [2024-11-25 12:26:56.018095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.599 [2024-11-25 12:26:56.018102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:02.599 [2024-11-25 12:26:56.018117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:2824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.599 [2024-11-25 12:26:56.018124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:02.599 [2024-11-25 12:26:56.018136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.599 [2024-11-25 12:26:56.018143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:02.599 [2024-11-25 12:26:56.018155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.599 [2024-11-25 12:26:56.018161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:02.599 [2024-11-25 12:26:56.018173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:2376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.599 [2024-11-25 12:26:56.018180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:02.599 [2024-11-25 12:26:56.018192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:2960 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:02.599 [2024-11-25 12:26:56.018205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:28:02.599 [2024-11-25 12:26:56.018217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:2992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.599 [2024-11-25 12:26:56.018224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:02.599 [2024-11-25 12:26:56.018236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:3024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.599 [2024-11-25 12:26:56.018242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:02.599 [2024-11-25 12:26:56.018254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.599 [2024-11-25 12:26:56.018261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:02.599 [2024-11-25 12:26:56.018273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.599 [2024-11-25 12:26:56.018281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:02.599 [2024-11-25 12:26:56.018294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:2848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.599 [2024-11-25 12:26:56.018300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:02.599 [2024-11-25 12:26:56.018312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:2912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.599 [2024-11-25 12:26:56.018319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:02.599 [2024-11-25 12:26:56.018709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.599 [2024-11-25 12:26:56.018721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:02.599 [2024-11-25 12:26:56.018742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:3160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.599 [2024-11-25 12:26:56.018759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:02.599 [2024-11-25 12:26:56.018771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.599 [2024-11-25 12:26:56.018778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:02.599 [2024-11-25 12:26:56.018790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:39 nsid:1 lba:3192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.599 [2024-11-25 12:26:56.018797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:02.599 [2024-11-25 12:26:56.018811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:3208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.599 [2024-11-25 12:26:56.018818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:02.599 [2024-11-25 12:26:56.018830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:3224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.599 [2024-11-25 12:26:56.018837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:02.599 [2024-11-25 12:26:56.018850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:3240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.599 [2024-11-25 12:26:56.018856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:02.599 [2024-11-25 12:26:56.018869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:2576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.599 [2024-11-25 12:26:56.018875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:02.599 [2024-11-25 12:26:56.018888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:2160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.599 [2024-11-25 12:26:56.018895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:02.599 [2024-11-25 12:26:56.018907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.599 [2024-11-25 12:26:56.018914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:02.599 [2024-11-25 12:26:56.018926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:1808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.599 [2024-11-25 12:26:56.018933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:02.599 [2024-11-25 12:26:56.019848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:2744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.599 [2024-11-25 12:26:56.019863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:02.599 [2024-11-25 12:26:56.019877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.599 [2024-11-25 12:26:56.019884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:02.599 [2024-11-25 12:26:56.019897] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:2872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.599 [2024-11-25 12:26:56.019911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:02.599 [2024-11-25 12:26:56.019923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:3056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.599 [2024-11-25 12:26:56.019930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:02.599 [2024-11-25 12:26:56.019942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.599 [2024-11-25 12:26:56.019949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:02.599 [2024-11-25 12:26:56.019962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:3120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.599 [2024-11-25 12:26:56.019969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:02.599 [2024-11-25 12:26:56.019981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.599 [2024-11-25 12:26:56.019988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:02.599 [2024-11-25 12:26:56.020000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.599 [2024-11-25 12:26:56.020007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:02.599 [2024-11-25 12:26:56.020020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:2656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.599 [2024-11-25 12:26:56.020026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:02.599 [2024-11-25 12:26:56.020039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:2440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.599 [2024-11-25 12:26:56.020046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:02.599 [2024-11-25 12:26:56.020058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:2568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.599 [2024-11-25 12:26:56.020065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:02.599 [2024-11-25 12:26:56.020077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.599 [2024-11-25 12:26:56.020084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 
00:28:02.599 [2024-11-25 12:26:56.020096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.599 [2024-11-25 12:26:56.020103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:02.599 [2024-11-25 12:26:56.020115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.600 [2024-11-25 12:26:56.020122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:02.600 [2024-11-25 12:26:56.020134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.600 [2024-11-25 12:26:56.020144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:02.600 [2024-11-25 12:26:56.020157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:3024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.600 [2024-11-25 12:26:56.020163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:02.600 [2024-11-25 12:26:56.020175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:2784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.600 [2024-11-25 12:26:56.020182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:02.600 [2024-11-25 12:26:56.020194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:2912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.600 [2024-11-25 12:26:56.020201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:02.600 [2024-11-25 12:26:56.020213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:2968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.600 [2024-11-25 12:26:56.020220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:02.600 [2024-11-25 12:26:56.020233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:3000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.600 [2024-11-25 12:26:56.020239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:02.600 [2024-11-25 12:26:56.020252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:3032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.600 [2024-11-25 12:26:56.020258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:02.600 [2024-11-25 12:26:56.020271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.600 [2024-11-25 12:26:56.020277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:29 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:02.600 [2024-11-25 12:26:56.020289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.600 [2024-11-25 12:26:56.020296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:02.600 [2024-11-25 12:26:56.020308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:3224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.600 [2024-11-25 12:26:56.020315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:02.600 [2024-11-25 12:26:56.020327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:2576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.600 [2024-11-25 12:26:56.020334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:02.600 [2024-11-25 12:26:56.020346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:2056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.600 [2024-11-25 12:26:56.020354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:02.600 [2024-11-25 12:26:56.021219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:2416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.600 [2024-11-25 12:26:56.021234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:02.600 [2024-11-25 12:26:56.021253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:3248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.600 [2024-11-25 12:26:56.021260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:02.600 [2024-11-25 12:26:56.021273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:3264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.600 [2024-11-25 12:26:56.021280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:02.600 [2024-11-25 12:26:56.021292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.600 [2024-11-25 12:26:56.021299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:02.600 [2024-11-25 12:26:56.021311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:3296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.600 [2024-11-25 12:26:56.021318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:02.600 [2024-11-25 12:26:56.021330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:3312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.600 [2024-11-25 12:26:56.021337] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:02.600 [2024-11-25 12:26:56.021349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:3328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.600 [2024-11-25 12:26:56.021356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:02.600 [2024-11-25 12:26:56.021368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:3344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.600 [2024-11-25 12:26:56.021375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:02.600 [2024-11-25 12:26:56.021387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:2544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.600 [2024-11-25 12:26:56.021394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:02.600 [2024-11-25 12:26:56.021406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.600 [2024-11-25 12:26:56.021413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:02.600 [2024-11-25 12:26:56.021426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:3048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.600 [2024-11-25 12:26:56.021432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:02.600 [2024-11-25 12:26:56.021444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:3080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.600 [2024-11-25 12:26:56.021451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:02.600 [2024-11-25 12:26:56.021463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:3112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.600 [2024-11-25 12:26:56.021470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:02.600 [2024-11-25 12:26:56.021485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.600 [2024-11-25 12:26:56.021492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.600 [2024-11-25 12:26:56.021504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:2808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.600 [2024-11-25 12:26:56.021511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:02.600 [2024-11-25 12:26:56.021523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:3056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.600 [2024-11-25 
12:26:56.021530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:02.600 [2024-11-25 12:26:56.021542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:3120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.600 [2024-11-25 12:26:56.021549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:02.600 [2024-11-25 12:26:56.021561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.600 [2024-11-25 12:26:56.021568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:02.600 [2024-11-25 12:26:56.021580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:2440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.600 [2024-11-25 12:26:56.021587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:02.601 [2024-11-25 12:26:56.021599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:2696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.601 [2024-11-25 12:26:56.021606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:02.601 [2024-11-25 12:26:56.021619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:2248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.601 [2024-11-25 12:26:56.021625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:02.601 [2024-11-25 12:26:56.021637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:3024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.601 [2024-11-25 12:26:56.021644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:02.601 [2024-11-25 12:26:56.021656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:2912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.601 [2024-11-25 12:26:56.021663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:02.601 [2024-11-25 12:26:56.021675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:3000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.601 [2024-11-25 12:26:56.021682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:02.601 [2024-11-25 12:26:56.021694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:3160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.601 [2024-11-25 12:26:56.021701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:02.601 [2024-11-25 12:26:56.021713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:3224 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:02.601 [2024-11-25 12:26:56.021724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:02.601 [2024-11-25 12:26:56.021736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.601 [2024-11-25 12:26:56.021743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:02.601 [2024-11-25 12:26:56.023288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.601 [2024-11-25 12:26:56.023304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:02.601 [2024-11-25 12:26:56.023318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.601 [2024-11-25 12:26:56.023325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:02.601 [2024-11-25 12:26:56.023338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:3400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.601 [2024-11-25 12:26:56.023345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:02.601 [2024-11-25 12:26:56.023357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:3416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.601 [2024-11-25 12:26:56.023364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:02.601 [2024-11-25 12:26:56.023377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:3432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.601 [2024-11-25 12:26:56.023383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:02.601 [2024-11-25 12:26:56.023395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:3448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.601 [2024-11-25 12:26:56.023402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:02.601 [2024-11-25 12:26:56.023414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.601 [2024-11-25 12:26:56.023421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:02.601 [2024-11-25 12:26:56.023434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.601 [2024-11-25 12:26:56.023440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:02.601 [2024-11-25 12:26:56.023453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:115 nsid:1 lba:2976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.601 [2024-11-25 12:26:56.023460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:02.601 [2024-11-25 12:26:56.023472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:3040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.601 [2024-11-25 12:26:56.023479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:02.601 [2024-11-25 12:26:56.023491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:3152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.601 [2024-11-25 12:26:56.023504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:02.601 [2024-11-25 12:26:56.023516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:3184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.601 [2024-11-25 12:26:56.023523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:02.601 [2024-11-25 12:26:56.023536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:3216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.601 [2024-11-25 12:26:56.023543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:02.601 [2024-11-25 12:26:56.023555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:2512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.601 [2024-11-25 12:26:56.023562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:02.601 [2024-11-25 12:26:56.023574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:3248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.601 [2024-11-25 12:26:56.023581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:02.601 [2024-11-25 12:26:56.023593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:3280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.601 [2024-11-25 12:26:56.023600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:02.601 [2024-11-25 12:26:56.023612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:3312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.601 [2024-11-25 12:26:56.023619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.601 [2024-11-25 12:26:56.023630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:3344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.601 [2024-11-25 12:26:56.023637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.601 [2024-11-25 12:26:56.023649] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:2672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.601 [2024-11-25 12:26:56.023656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.601 [2024-11-25 12:26:56.023668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:3080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.601 [2024-11-25 12:26:56.023675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:02.601 [2024-11-25 12:26:56.023688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:3144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.601 [2024-11-25 12:26:56.023694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:02.601 [2024-11-25 12:26:56.023707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:3056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.601 [2024-11-25 12:26:56.023713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:02.601 [2024-11-25 12:26:56.023726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:2528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.601 [2024-11-25 12:26:56.023733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:02.601 [2024-11-25 12:26:56.023759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.601 [2024-11-25 12:26:56.023767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:02.601 [2024-11-25 12:26:56.023779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:3024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.601 [2024-11-25 12:26:56.023786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:02.601 [2024-11-25 12:26:56.023798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:3000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.601 [2024-11-25 12:26:56.023805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:02.601 [2024-11-25 12:26:56.023817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:3224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.601 [2024-11-25 12:26:56.023824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:02.601 [2024-11-25 12:26:56.023836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.601 [2024-11-25 12:26:56.023843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000b p:0 m:0 dnr:0 
00:28:02.601 [2024-11-25 12:26:56.023855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:3136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.601 [2024-11-25 12:26:56.023862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:02.601 [2024-11-25 12:26:56.023874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:2888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.601 [2024-11-25 12:26:56.023881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:02.602 [2024-11-25 12:26:56.023893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:3176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.602 [2024-11-25 12:26:56.023900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:02.602 [2024-11-25 12:26:56.023912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:3240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.602 [2024-11-25 12:26:56.023919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:02.602 [2024-11-25 12:26:56.025327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:3464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.602 [2024-11-25 12:26:56.025343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:02.602 [2024-11-25 12:26:56.025357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:3480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.602 [2024-11-25 12:26:56.025364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:02.602 [2024-11-25 12:26:56.025376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:3496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.602 [2024-11-25 12:26:56.025383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:02.602 [2024-11-25 12:26:56.025402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:3512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.602 [2024-11-25 12:26:56.025409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:02.602 [2024-11-25 12:26:56.025421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:3528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.602 [2024-11-25 12:26:56.025428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:02.602 [2024-11-25 12:26:56.025440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.602 [2024-11-25 12:26:56.025448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:2 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:02.602 [2024-11-25 12:26:56.025460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:3560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.602 [2024-11-25 12:26:56.025467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:02.602 [2024-11-25 12:26:56.025479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:3576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.602 [2024-11-25 12:26:56.025486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:02.602 [2024-11-25 12:26:56.025498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:3592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.602 [2024-11-25 12:26:56.025505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:02.602 [2024-11-25 12:26:56.025518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:3272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.602 [2024-11-25 12:26:56.025525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:02.602 [2024-11-25 12:26:56.025537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.602 [2024-11-25 12:26:56.025544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:02.602 [2024-11-25 12:26:56.025556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:3336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.602 [2024-11-25 12:26:56.025563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:02.602 [2024-11-25 12:26:56.025575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.602 [2024-11-25 12:26:56.025581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:02.602 [2024-11-25 12:26:56.025594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:2960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.602 [2024-11-25 12:26:56.025600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:02.602 [2024-11-25 12:26:56.025613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:2576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.602 [2024-11-25 12:26:56.025620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:02.602 [2024-11-25 12:26:56.025632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.602 [2024-11-25 12:26:56.025642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:28:02.602 [2024-11-25 12:26:56.025654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.602 [2024-11-25 12:26:56.025661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:28:02.602 [2024-11-25 12:26:56.025692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:3384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:02.602 [2024-11-25 12:26:56.025699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[... several hundred further paired nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* entries omitted (wall clock 00:28:02.602-00:28:02.608, timestamps 12:26:56.025654-12:26:56.037240): READ and WRITE commands on sqid:1, nsid:1, lba 2512-4464, len:8, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:28:02.608 [2024-11-25 12:26:56.037214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:3928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.608 [2024-11-25 12:26:56.037221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:28:02.608 [2024-11-25 12:26:56.037233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:4208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:02.608 [2024-11-25 12:26:56.037240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1
cid:97 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:02.608 [2024-11-25 12:26:56.037252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:4240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.608 [2024-11-25 12:26:56.037259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:28:02.608 [2024-11-25 12:26:56.037278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:4112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.608 [2024-11-25 12:26:56.037284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:28:02.608 [2024-11-25 12:26:56.037296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.608 [2024-11-25 12:26:56.037303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:28:02.608 [2024-11-25 12:26:56.037315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.608 [2024-11-25 12:26:56.037321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:28:02.608 [2024-11-25 12:26:56.037334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:3792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.608 [2024-11-25 12:26:56.037341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:28:02.608 [2024-11-25 12:26:56.037353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:4280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.608 [2024-11-25 12:26:56.037359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:28:02.608 [2024-11-25 12:26:56.037371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:4312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.608 [2024-11-25 12:26:56.037378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:28:02.608 [2024-11-25 12:26:56.037390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.608 [2024-11-25 12:26:56.037396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:28:02.608 [2024-11-25 12:26:56.037408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:3280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.608 [2024-11-25 12:26:56.037415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:28:02.608 [2024-11-25 12:26:56.037427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:3936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.608 [2024-11-25 12:26:56.037434] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:28:02.608 [2024-11-25 12:26:56.037447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:3184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.608 [2024-11-25 12:26:56.037453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:28:02.608 [2024-11-25 12:26:56.037465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:3400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.608 [2024-11-25 12:26:56.037472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:28:02.608 [2024-11-25 12:26:56.037485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.608 [2024-11-25 12:26:56.037491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:02.608 [2024-11-25 12:26:56.037503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:4248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.608 [2024-11-25 12:26:56.037513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:28:02.608 [2024-11-25 12:26:56.037525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.608 [2024-11-25 12:26:56.037532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:02.608 [2024-11-25 12:26:56.037544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:3912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.608 [2024-11-25 12:26:56.037552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:02.608 [2024-11-25 12:26:56.037564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:4120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.608 [2024-11-25 12:26:56.037570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:28:02.608 [2024-11-25 12:26:56.037582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:4184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.608 [2024-11-25 12:26:56.037589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:02.608 [2024-11-25 12:26:56.037601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:3832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.608 [2024-11-25 12:26:56.037608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.608 [2024-11-25 12:26:56.037620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:4024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.608 [2024-11-25 
12:26:56.037626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:28:02.608 [2024-11-25 12:26:56.037638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:3776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.608 [2024-11-25 12:26:56.037645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:02.608 [2024-11-25 12:26:56.037657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:4272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.608 [2024-11-25 12:26:56.037664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:02.608 [2024-11-25 12:26:56.037676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.608 [2024-11-25 12:26:56.037683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:28:02.608 [2024-11-25 12:26:56.037694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:4336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.608 [2024-11-25 12:26:56.037701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:28:02.608 [2024-11-25 12:26:56.037713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.608 [2024-11-25 12:26:56.037719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:28:02.608 [2024-11-25 12:26:56.037732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:3744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.608 [2024-11-25 12:26:56.037744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:02.608 [2024-11-25 12:26:56.037765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:4368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.608 [2024-11-25 12:26:56.037772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:28:02.608 [2024-11-25 12:26:56.037784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:4400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.608 [2024-11-25 12:26:56.037790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:02.608 [2024-11-25 12:26:56.037802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.608 [2024-11-25 12:26:56.037809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:02.608 [2024-11-25 12:26:56.037821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:4464 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:28:02.608 [2024-11-25 12:26:56.037828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:02.608 [2024-11-25 12:26:56.038855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:4200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.608 [2024-11-25 12:26:56.038872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:02.608 [2024-11-25 12:26:56.038886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:4480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.608 [2024-11-25 12:26:56.038893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:02.608 [2024-11-25 12:26:56.038905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:4496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.608 [2024-11-25 12:26:56.038912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:02.608 [2024-11-25 12:26:56.038924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.608 [2024-11-25 12:26:56.038931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:02.608 [2024-11-25 12:26:56.038943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:4528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.608 [2024-11-25 12:26:56.038950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:28:02.608 [2024-11-25 12:26:56.038962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:4544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.608 [2024-11-25 12:26:56.038969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:02.609 [2024-11-25 12:26:56.038981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.609 [2024-11-25 12:26:56.038988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:28:02.609 [2024-11-25 12:26:56.039001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.609 [2024-11-25 12:26:56.039008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:28:02.609 [2024-11-25 12:26:56.039272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.609 [2024-11-25 12:26:56.039283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:28:02.609 [2024-11-25 12:26:56.039296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 
nsid:1 lba:4608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.609 [2024-11-25 12:26:56.039304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:28:02.609 [2024-11-25 12:26:56.039316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.609 [2024-11-25 12:26:56.039323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:02.609 [2024-11-25 12:26:56.039335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.609 [2024-11-25 12:26:56.039342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:02.609 [2024-11-25 12:26:56.039354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.609 [2024-11-25 12:26:56.039361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:02.609 [2024-11-25 12:26:56.039373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:4160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.609 [2024-11-25 12:26:56.039380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:28:02.609 [2024-11-25 12:26:56.039393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:3928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.609 [2024-11-25 12:26:56.039399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:28:02.609 [2024-11-25 12:26:56.039412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.609 [2024-11-25 12:26:56.039419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:28:02.609 [2024-11-25 12:26:56.039431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.609 [2024-11-25 12:26:56.039438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:28:02.609 [2024-11-25 12:26:56.039450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.609 [2024-11-25 12:26:56.039457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:28:02.609 [2024-11-25 12:26:56.039469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:4312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.609 [2024-11-25 12:26:56.039476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:28:02.609 [2024-11-25 12:26:56.039488] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.609 [2024-11-25 12:26:56.039495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:28:02.609 [2024-11-25 12:26:56.039515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:3184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.609 [2024-11-25 12:26:56.039523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:02.609 [2024-11-25 12:26:56.039535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.609 [2024-11-25 12:26:56.039542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:28:02.609 [2024-11-25 12:26:56.039554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:3848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.609 [2024-11-25 12:26:56.039561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:28:02.609 [2024-11-25 12:26:56.039574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:4120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.609 [2024-11-25 12:26:56.039580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:28:02.609 [2024-11-25 12:26:56.039593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:3832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.609 [2024-11-25 12:26:56.039599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:28:02.609 [2024-11-25 12:26:56.039612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:3776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.609 [2024-11-25 12:26:56.039619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:28:02.609 [2024-11-25 12:26:56.039631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:4304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.609 [2024-11-25 12:26:56.039637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:28:02.609 [2024-11-25 12:26:56.039649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:4008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.609 [2024-11-25 12:26:56.039657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:28:02.609 [2024-11-25 12:26:56.039669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:4368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.609 [2024-11-25 12:26:56.039676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 
00:28:02.609 [2024-11-25 12:26:56.039688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:4432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.609 [2024-11-25 12:26:56.039695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:28:02.609 [2024-11-25 12:26:56.039707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.609 [2024-11-25 12:26:56.039714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:28:02.609 [2024-11-25 12:26:56.039726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.609 [2024-11-25 12:26:56.039732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:28:02.609 [2024-11-25 12:26:56.039744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:4672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.609 [2024-11-25 12:26:56.039766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:28:02.609 [2024-11-25 12:26:56.039779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.609 [2024-11-25 12:26:56.039786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:28:02.609 [2024-11-25 12:26:56.039799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.609 [2024-11-25 12:26:56.039806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:28:02.609 [2024-11-25 12:26:56.039818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:4720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.609 [2024-11-25 12:26:56.039825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:28:02.609 [2024-11-25 12:26:56.039837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.609 [2024-11-25 12:26:56.039844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:28:02.609 [2024-11-25 12:26:56.039856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:4752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.609 [2024-11-25 12:26:56.039863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:28:02.609 [2024-11-25 12:26:56.040256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:4360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.609 [2024-11-25 12:26:56.040268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:99 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:28:02.609 [2024-11-25 12:26:56.040282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:4392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.609 [2024-11-25 12:26:56.040289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:28:02.609 [2024-11-25 12:26:56.040301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.609 [2024-11-25 12:26:56.040308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:28:02.609 [2024-11-25 12:26:56.040320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.609 [2024-11-25 12:26:56.040327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:28:02.609 [2024-11-25 12:26:56.040339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.609 [2024-11-25 12:26:56.040346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:28:02.609 [2024-11-25 12:26:56.040358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.609 [2024-11-25 12:26:56.040365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:28:02.610 [2024-11-25 12:26:56.040378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:4544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.610 [2024-11-25 12:26:56.040390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:28:02.610 [2024-11-25 12:26:56.040403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:4576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.610 [2024-11-25 12:26:56.040410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:28:02.610 [2024-11-25 12:26:56.041951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:4296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.610 [2024-11-25 12:26:56.041974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:28:02.610 [2024-11-25 12:26:56.041988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.610 [2024-11-25 12:26:56.041995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:28:02.610 [2024-11-25 12:26:56.042008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.610 [2024-11-25 12:26:56.042015] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:28:02.610 [2024-11-25 12:26:56.042028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:4608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.610 [2024-11-25 12:26:56.042034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:28:02.610 [2024-11-25 12:26:56.042047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.610 [2024-11-25 12:26:56.042054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:28:02.610 [2024-11-25 12:26:56.042066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:4160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.610 [2024-11-25 12:26:56.042073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:28:02.610 [2024-11-25 12:26:56.042085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:4240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.610 [2024-11-25 12:26:56.042092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:02.610 [2024-11-25 12:26:56.042104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:3792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.610 [2024-11-25 12:26:56.042111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:02.610 [2024-11-25 12:26:56.042124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:3280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.610 [2024-11-25 12:26:56.042130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:02.610 [2024-11-25 12:26:56.042143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:4064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.610 [2024-11-25 12:26:56.042150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:02.610 [2024-11-25 12:26:56.042162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:4120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.610 [2024-11-25 12:26:56.042169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:02.610 [2024-11-25 12:26:56.042188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.610 [2024-11-25 12:26:56.042195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:02.610 [2024-11-25 12:26:56.042213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:4008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.610 [2024-11-25 
12:26:56.042220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:02.610 [2024-11-25 12:26:56.042232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.610 [2024-11-25 12:26:56.042239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:02.610 [2024-11-25 12:26:56.042251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.610 [2024-11-25 12:26:56.042258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:02.610 [2024-11-25 12:26:56.042270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:4688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.610 [2024-11-25 12:26:56.042277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:02.610 [2024-11-25 12:26:56.042289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:4720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.610 [2024-11-25 12:26:56.042296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:02.610 [2024-11-25 12:26:56.042309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:4752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.610 [2024-11-25 12:26:56.042316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:02.610 [2024-11-25 12:26:56.042328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:4352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.610 [2024-11-25 12:26:56.042335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:02.610 [2024-11-25 12:26:56.042348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.610 [2024-11-25 12:26:56.042354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:02.610 [2024-11-25 12:26:56.042367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.610 [2024-11-25 12:26:56.042373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:02.610 [2024-11-25 12:26:56.042386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:4456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.610 [2024-11-25 12:26:56.042393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:02.610 [2024-11-25 12:26:56.042405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4512 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:02.610 [2024-11-25 12:26:56.042412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:02.610 [2024-11-25 12:26:56.042428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.610 [2024-11-25 12:26:56.042435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:02.610 [2024-11-25 12:26:56.042447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:4768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.610 [2024-11-25 12:26:56.042454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:02.610 [2024-11-25 12:26:56.042465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.610 [2024-11-25 12:26:56.042472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:28:02.610 [2024-11-25 12:26:56.042484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:4800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.610 [2024-11-25 12:26:56.042491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:02.610 [2024-11-25 12:26:56.042504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:4816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.610 [2024-11-25 12:26:56.042511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:02.610 [2024-11-25 12:26:56.042523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:4832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.610 [2024-11-25 12:26:56.042529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:02.610 [2024-11-25 12:26:56.042541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.610 [2024-11-25 12:26:56.042548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:02.610 [2024-11-25 12:26:56.042560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:4488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.610 [2024-11-25 12:26:56.042567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:02.610 [2024-11-25 12:26:56.042579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:4520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.610 [2024-11-25 12:26:56.042586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:28:02.610 [2024-11-25 12:26:56.042599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 
nsid:1 lba:4552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.610 [2024-11-25 12:26:56.042606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:28:02.610 [2024-11-25 12:26:56.042618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.611 [2024-11-25 12:26:56.042625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:28:02.611 [2024-11-25 12:26:56.042637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:4616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.611 [2024-11-25 12:26:56.042644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:28:02.611 [2024-11-25 12:26:56.042657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:4648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.611 [2024-11-25 12:26:56.042671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:28:02.611 [2024-11-25 12:26:56.044093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.611 [2024-11-25 12:26:56.044115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:28:02.611 [2024-11-25 12:26:56.044130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.611 [2024-11-25 12:26:56.044137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:28:02.611 [2024-11-25 12:26:56.044149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.611 [2024-11-25 12:26:56.044156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:02.611 [2024-11-25 12:26:56.044169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.611 [2024-11-25 12:26:56.044176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:28:02.611 [2024-11-25 12:26:56.044188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.611 [2024-11-25 12:26:56.044194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:28:02.611 [2024-11-25 12:26:56.044207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:4920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.611 [2024-11-25 12:26:56.044214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:28:02.611 [2024-11-25 12:26:56.044226] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:4936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.611 [2024-11-25 12:26:56.044233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:28:02.611 [2024-11-25 12:26:56.044245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:4952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.611 [2024-11-25 12:26:56.044252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:28:02.611 [2024-11-25 12:26:56.044264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:4968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.611 [2024-11-25 12:26:56.044271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:28:02.611 [2024-11-25 12:26:56.044283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:4984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.611 [2024-11-25 12:26:56.044290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:28:02.611 [2024-11-25 12:26:56.044302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.611 [2024-11-25 12:26:56.044308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:28:02.611 [2024-11-25 12:26:56.044321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:5016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:02.611 [2024-11-25 12:26:56.044327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:28:02.611 [2024-11-25 12:26:56.044346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:4664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.611 [2024-11-25 12:26:56.044353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:28:02.611 [2024-11-25 12:26:56.044365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:4696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.611 [2024-11-25 12:26:56.044372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:28:02.611 [2024-11-25 12:26:56.044385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.611 [2024-11-25 12:26:56.044392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:28:02.611 [2024-11-25 12:26:56.044404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:3576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.611 [2024-11-25 12:26:56.044411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:28:02.611 
00:28:02.611 [2024-11-25 12:26:56.044-56.054] nvme_qpair.c: *NOTICE*: [condensed: ~200 repeated nvme_io_qpair_print_command / spdk_nvme_print_completion record pairs, one per outstanding READ/WRITE on qid:1 nsid:1 (lba 3576-5784, len:8); every completion reports ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0]
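Editor's note: the (03/02) status above is the path-related NVMe code "ANA inaccessible"; the host prints one command/completion pair for each I/O still queued on a path whose ANA state has just changed. In SPDK's multipath tests that change is normally driven over JSON-RPC. A minimal hedged sketch of flipping a listener's ANA state with scripts/rpc.py follows; the listener address and port are illustrative assumptions, not values read from this log:

```bash
#!/usr/bin/env bash
# Hedged sketch: make queued I/O on one path complete with ASYMMETRIC ACCESS
# INACCESSIBLE (03/02) by marking the listener inaccessible, then restore it.
# The 10.0.0.3:4420 listener below is an assumption, not taken from this log.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

$rpc nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.3 -s 4420 -n inaccessible
sleep 1   # let in-flight I/O drain and fail over to the remaining path
$rpc nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.3 -s 4420 -n optimized
```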
00:28:02.616 12380.30 IOPS, 48.36 MiB/s
[2024-11-25T12:26:58.492Z] 12401.29 IOPS, 48.44 MiB/s
[2024-11-25T12:26:58.492Z] Received shutdown signal, test time was about 28.744411 seconds
00:28:02.616
00:28:02.616                                                 Latency(us)
00:28:02.616 [2024-11-25T12:26:58.492Z] Device Information          : runtime(s)     IOPS      MiB/s    Fail/s    TO/s    Average       min        max
00:28:02.616 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:28:02.616 Verification LBA range: start 0x0 length 0x4000
00:28:02.616 Nvme0n1                     : 28.74      12412.50     48.49     0.00     0.00   10290.73    354.46 3019898.88
00:28:02.616 [2024-11-25T12:26:58.492Z] ===================================================================================================================
00:28:02.616 [2024-11-25T12:26:58.492Z] Total                       : 12412.50     48.49     0.00     0.00   10290.73    354.46 3019898.88
00:28:02.616 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:02.874 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:28:02.874 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:28:02.874 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:28:02.874 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:02.874 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:28:02.874 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:02.874 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:28:02.874 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:02.874 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:28:02.875 rmmod nvme_tcp
00:28:02.875 rmmod nvme_fabrics
00:28:02.875 rmmod nvme_keyring
00:28:02.875 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:02.875 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:28:02.875 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
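Editor's note: the nvmfcleanup trace above unloads the kernel initiator modules under `set +e` inside a bounded retry loop. A hedged reconstruction of that helper, based only on the commands traced above (the break condition and any sleep between retries are assumptions beyond what the log shows):

```bash
# Hedged reconstruction of nvmf/common.sh's nvmfcleanup as traced above;
# retry details beyond the visible `for i in {1..20}` loop are assumed.
nvmfcleanup() {
    sync
    if [ tcp == tcp ]; then        # transport check seen at common.sh@123
        set +e                     # module removal may fail while refs drain
        for i in {1..20}; do
            modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
            sleep 1
        done
        set -e
    fi
    return 0
}
```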
# set -e 00:28:02.875 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:28:02.875 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 88988 ']' 00:28:02.875 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 88988 00:28:02.875 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 88988 ']' 00:28:02.875 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 88988 00:28:02.875 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:28:02.875 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:02.875 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88988 00:28:02.875 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:02.875 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:02.875 killing process with pid 88988 00:28:02.875 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88988' 00:28:02.875 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 88988 00:28:02.875 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 88988 00:28:03.133 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:03.133 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:03.133 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:03.133 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:28:03.133 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:03.133 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:28:03.133 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:28:03.133 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:03.133 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:28:03.133 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:28:03.133 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:28:03.133 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:28:03.133 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:28:03.133 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:28:03.133 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:28:03.133 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set 
nvmf_tgt_br down 00:28:03.133 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:28:03.133 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:28:03.133 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:28:03.133 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:28:03.133 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:03.133 12:26:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:03.133 12:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:28:03.133 12:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:03.133 12:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:03.133 12:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:03.393 12:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:28:03.393 00:28:03.393 real 0m34.033s 00:28:03.393 user 1m50.640s 00:28:03.393 sys 0m6.270s 00:28:03.393 12:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:03.393 12:26:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:28:03.393 ************************************ 00:28:03.393 END TEST nvmf_host_multipath_status 00:28:03.393 ************************************ 00:28:03.393 12:26:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:28:03.393 12:26:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:03.393 12:26:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:03.393 12:26:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.393 ************************************ 00:28:03.393 START TEST nvmf_discovery_remove_ifc 00:28:03.393 ************************************ 00:28:03.393 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:28:03.393 * Looking for test storage... 
00:28:03.393 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:28:03.393 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:03.393 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:28:03.393 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:03.393 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:03.393 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:03.393 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:03.393 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:03.393 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:28:03.393 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:28:03.393 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:28:03.393 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:28:03.393 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:28:03.393 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:28:03.393 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:28:03.393 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:03.393 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:28:03.393 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:28:03.393 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:03.393 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:03.393 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:28:03.393 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:28:03.393 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:03.393 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:28:03.393 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:28:03.393 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:28:03.393 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:28:03.393 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:03.393 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:28:03.393 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:28:03.393 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:03.393 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:03.393 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:28:03.393 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:03.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.394 --rc genhtml_branch_coverage=1 00:28:03.394 --rc genhtml_function_coverage=1 00:28:03.394 --rc genhtml_legend=1 00:28:03.394 --rc geninfo_all_blocks=1 00:28:03.394 --rc geninfo_unexecuted_blocks=1 00:28:03.394 00:28:03.394 ' 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:03.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.394 --rc genhtml_branch_coverage=1 00:28:03.394 --rc genhtml_function_coverage=1 00:28:03.394 --rc genhtml_legend=1 00:28:03.394 --rc geninfo_all_blocks=1 00:28:03.394 --rc geninfo_unexecuted_blocks=1 00:28:03.394 00:28:03.394 ' 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:03.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.394 --rc genhtml_branch_coverage=1 00:28:03.394 --rc genhtml_function_coverage=1 00:28:03.394 --rc genhtml_legend=1 00:28:03.394 --rc geninfo_all_blocks=1 00:28:03.394 --rc geninfo_unexecuted_blocks=1 00:28:03.394 00:28:03.394 ' 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:03.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:03.394 --rc genhtml_branch_coverage=1 00:28:03.394 --rc genhtml_function_coverage=1 00:28:03.394 --rc genhtml_legend=1 00:28:03.394 --rc geninfo_all_blocks=1 00:28:03.394 --rc geninfo_unexecuted_blocks=1 00:28:03.394 00:28:03.394 ' 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:03.394 12:26:59 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:03.394 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:03.394 12:26:59 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:28:03.394 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:03.395 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:03.395 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:03.395 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:03.395 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:03.395 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:03.395 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:28:03.395 Cannot find device "nvmf_init_br" 00:28:03.395 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:28:03.395 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:28:03.395 Cannot find device "nvmf_init_br2" 00:28:03.395 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:28:03.395 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:28:03.395 Cannot find device "nvmf_tgt_br" 00:28:03.395 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:28:03.395 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:28:03.395 Cannot find device "nvmf_tgt_br2" 00:28:03.395 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:28:03.395 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:28:03.395 Cannot find device "nvmf_init_br" 00:28:03.395 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:28:03.395 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:28:03.653 Cannot find device "nvmf_init_br2" 00:28:03.653 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:28:03.653 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:28:03.653 Cannot find device "nvmf_tgt_br" 00:28:03.653 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:28:03.653 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:28:03.653 Cannot find device "nvmf_tgt_br2" 00:28:03.653 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:28:03.653 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:28:03.653 Cannot find device "nvmf_br" 00:28:03.653 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:28:03.653 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:28:03.653 Cannot find device "nvmf_init_if" 00:28:03.653 12:26:59 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:28:03.653 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:28:03.653 Cannot find device "nvmf_init_if2" 00:28:03.653 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:28:03.653 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:03.653 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:03.653 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:28:03.653 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:03.653 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:03.653 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:28:03.653 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:28:03.653 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:03.653 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:28:03.653 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:03.653 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:03.653 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:03.653 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:03.653 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:03.653 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:28:03.653 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:28:03.653 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:28:03.653 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:28:03.653 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:28:03.653 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:28:03.653 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:28:03.653 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:28:03.654 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:28:03.654 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:03.654 12:26:59 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:03.654 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:03.654 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:28:03.654 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:28:03.654 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:28:03.654 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:28:03.654 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:03.654 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:03.654 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:03.654 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:28:03.654 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:28:03.654 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:28:03.654 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:03.654 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:28:03.654 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:28:03.654 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:03.654 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:28:03.654 00:28:03.654 --- 10.0.0.3 ping statistics --- 00:28:03.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:03.654 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:28:03.654 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:28:03.654 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:28:03.654 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:28:03.654 00:28:03.654 --- 10.0.0.4 ping statistics --- 00:28:03.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:03.654 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:28:03.654 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:03.654 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:03.654 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:28:03.654 00:28:03.654 --- 10.0.0.1 ping statistics --- 00:28:03.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:03.654 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:28:03.654 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:28:03.654 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:03.654 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.033 ms 00:28:03.654 00:28:03.654 --- 10.0.0.2 ping statistics --- 00:28:03.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:03.654 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:28:03.654 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:03.654 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:28:03.654 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:03.654 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:03.654 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:03.654 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:03.654 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:03.654 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:03.654 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:03.912 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:28:03.912 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:03.912 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:03.912 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:03.912 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=90401 00:28:03.912 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 90401 00:28:03.912 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:03.912 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 90401 ']' 00:28:03.912 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:03.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:03.912 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:03.912 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
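For reference, the nvmf_veth_init sequence traced above reduces to roughly the following sketch (interface names and addresses taken from the trace; the second initiator/target pair, the link-up steps, and error handling are elided):

    # condensed sketch of the veth/namespace topology built by nvmf_veth_init
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end + bridge-facing peer
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target end + bridge-facing peer
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                      # bridge stitches the two peers together
    ip link set nvmf_tgt_br master nvmf_br
    # open the NVMe/TCP port; the comment tag lets the teardown strip only SPDK rules
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.3                                           # reachability check, as in the log

The SPDK_NVMF comment tag is what makes the cleanup pass (iptables-save | grep -v SPDK_NVMF | iptables-restore, traced near the top of this section) safe to run on shared hosts.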
00:28:03.912 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:03.912 12:26:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:03.912 [2024-11-25 12:26:59.572545] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:28:03.912 [2024-11-25 12:26:59.572601] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:03.912 [2024-11-25 12:26:59.706950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:03.912 [2024-11-25 12:26:59.741358] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:03.912 [2024-11-25 12:26:59.741396] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:03.912 [2024-11-25 12:26:59.741402] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:03.912 [2024-11-25 12:26:59.741407] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:03.912 [2024-11-25 12:26:59.741412] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:03.912 [2024-11-25 12:26:59.741666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:04.847 12:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:04.847 12:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:28:04.847 12:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:04.847 12:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:04.847 12:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:04.847 12:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:04.847 12:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:28:04.847 12:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.847 12:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:04.847 [2024-11-25 12:27:00.470452] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:04.847 [2024-11-25 12:27:00.478539] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:28:04.847 null0 00:28:04.847 [2024-11-25 12:27:00.510487] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:04.847 12:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.847 12:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=90451 00:28:04.847 12:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 90451 /tmp/host.sock 00:28:04.847 12:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 90451 ']' 00:28:04.847 12:27:00 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:28:04.847 12:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:28:04.847 12:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:04.847 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:04.847 12:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:28:04.847 12:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:04.847 12:27:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:04.847 [2024-11-25 12:27:00.569401] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:28:04.847 [2024-11-25 12:27:00.569463] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90451 ] 00:28:04.847 [2024-11-25 12:27:00.709392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:05.105 [2024-11-25 12:27:00.745438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:05.713 12:27:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:05.713 12:27:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:28:05.713 12:27:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:05.713 12:27:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:28:05.713 12:27:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.713 12:27:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:05.713 12:27:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.713 12:27:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:28:05.713 12:27:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.713 12:27:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:05.713 12:27:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.713 12:27:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:28:05.713 12:27:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.714 12:27:01 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:07.087 [2024-11-25 12:27:02.532968] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:28:07.087 [2024-11-25 12:27:02.532995] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:28:07.087 [2024-11-25 12:27:02.533007] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:28:07.087 [2024-11-25 12:27:02.619090] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:28:07.087 [2024-11-25 12:27:02.674442] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:28:07.087 [2024-11-25 12:27:02.675030] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x9831f0:1 started. 00:28:07.087 [2024-11-25 12:27:02.676468] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:07.087 [2024-11-25 12:27:02.676518] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:07.087 [2024-11-25 12:27:02.676537] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:07.087 [2024-11-25 12:27:02.676548] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:28:07.087 [2024-11-25 12:27:02.676567] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:28:07.087 12:27:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.087 12:27:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:28:07.087 12:27:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:07.087 12:27:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:07.087 [2024-11-25 12:27:02.681682] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x9831f0 was disconnected and freed. delete nvme_qpair. 
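The host-side setup that produced the discovery attach above is driven entirely over the /tmp/host.sock RPC channel; condensed from the trace, it is roughly:

    # sketch of the host RPC sequence (arguments exactly as traced)
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /tmp/host.sock bdev_nvme_set_options -e 1   # set before framework init (app ran with --wait-for-rpc)
    $rpc -s /tmp/host.sock framework_start_init
    $rpc -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
        --wait-for-attach

The 2-second ctrlr-loss timeout configured here is what bounds how long nvme0n1 survives once the target interface disappears below.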
00:28:07.087 12:27:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:07.087 12:27:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:07.087 12:27:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:07.087 12:27:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.087 12:27:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:07.087 12:27:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.087 12:27:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:28:07.087 12:27:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:28:07.087 12:27:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:28:07.087 12:27:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:28:07.087 12:27:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:07.087 12:27:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:07.087 12:27:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.087 12:27:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:07.087 12:27:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:07.087 12:27:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:07.087 12:27:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:07.087 12:27:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.087 12:27:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:07.087 12:27:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:08.021 12:27:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:08.021 12:27:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:08.021 12:27:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:08.021 12:27:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:08.021 12:27:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.021 12:27:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:08.021 12:27:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:08.021 12:27:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.021 12:27:03 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:08.021 12:27:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:08.957 12:27:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:08.957 12:27:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:08.957 12:27:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:08.957 12:27:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.957 12:27:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:08.957 12:27:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:08.957 12:27:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:08.957 12:27:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.214 12:27:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:09.214 12:27:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:10.149 12:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:10.149 12:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:10.149 12:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:10.149 12:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:10.149 12:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:10.149 12:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:10.149 12:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:10.149 12:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:10.149 12:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:10.149 12:27:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:11.083 12:27:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:11.083 12:27:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:11.083 12:27:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:11.083 12:27:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.083 12:27:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:11.083 12:27:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:11.083 12:27:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:11.083 12:27:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.083 12:27:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:11.083 12:27:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:12.458 12:27:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:12.458 12:27:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:12.459 12:27:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.459 12:27:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:12.459 12:27:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:12.459 12:27:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:12.459 12:27:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:12.459 12:27:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.459 12:27:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:12.459 12:27:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:12.459 [2024-11-25 12:27:08.104878] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:28:12.459 [2024-11-25 12:27:08.104920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.459 [2024-11-25 12:27:08.104929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.459 [2024-11-25 12:27:08.104938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.459 [2024-11-25 12:27:08.104943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.459 [2024-11-25 12:27:08.104948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.459 [2024-11-25 12:27:08.104953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.459 [2024-11-25 12:27:08.104958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.459 [2024-11-25 12:27:08.104962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.459 [2024-11-25 12:27:08.104968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:12.459 [2024-11-25 12:27:08.104972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:12.459 [2024-11-25 12:27:08.104977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x960520 
is same with the state(6) to be set 00:28:12.459 [2024-11-25 12:27:08.114876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x960520 (9): Bad file descriptor 00:28:12.459 [2024-11-25 12:27:08.124886] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:28:12.459 [2024-11-25 12:27:08.124899] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:28:12.459 [2024-11-25 12:27:08.124904] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:12.459 [2024-11-25 12:27:08.124907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:12.459 [2024-11-25 12:27:08.124929] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:13.393 12:27:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:13.393 12:27:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:13.393 12:27:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:13.393 12:27:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:13.393 12:27:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:13.393 12:27:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.393 12:27:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:13.393 [2024-11-25 12:27:09.159820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:28:13.393 [2024-11-25 12:27:09.159929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x960520 with addr=10.0.0.3, port=4420 00:28:13.393 [2024-11-25 12:27:09.159961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x960520 is same with the state(6) to be set 00:28:13.393 [2024-11-25 12:27:09.160023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x960520 (9): Bad file descriptor 00:28:13.393 [2024-11-25 12:27:09.161195] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:28:13.393 [2024-11-25 12:27:09.161275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:13.393 [2024-11-25 12:27:09.161294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:13.393 [2024-11-25 12:27:09.161314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:13.393 [2024-11-25 12:27:09.161331] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:13.393 [2024-11-25 12:27:09.161342] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:13.393 [2024-11-25 12:27:09.161352] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
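The repeating get_bdev_list/sleep pattern in the trace is the test's wait_for_bdev loop; once the target-side interface has been removed (ip addr del / ip link set nvmf_tgt_if down, traced above), it amounts to roughly:

    # sketch of the polling loop visible in the trace
    get_bdev_list() {
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
            | jq -r '.[].name' | sort | xargs
    }
    while [[ "$(get_bdev_list)" != '' ]]; do    # waits for nvme0n1 to drain from the bdev list
        sleep 1
    done

nvme0n1 keeps appearing in the list while reconnect attempts run; it can only disappear after the 1-second reconnect delays and the 2-second ctrlr-loss timeout set at discovery time have expired.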
00:28:13.393 [2024-11-25 12:27:09.161370] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:28:13.393 [2024-11-25 12:27:09.161380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:13.393 12:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.393 12:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:13.393 12:27:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:14.327 [2024-11-25 12:27:10.161456] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:28:14.327 [2024-11-25 12:27:10.161485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:28:14.327 [2024-11-25 12:27:10.161503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:14.327 [2024-11-25 12:27:10.161509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:14.327 [2024-11-25 12:27:10.161514] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:28:14.327 [2024-11-25 12:27:10.161519] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:28:14.327 [2024-11-25 12:27:10.161523] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:28:14.327 [2024-11-25 12:27:10.161526] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:28:14.327 [2024-11-25 12:27:10.161546] bdev_nvme.c:7230:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:28:14.327 [2024-11-25 12:27:10.161572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:14.327 [2024-11-25 12:27:10.161578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.327 [2024-11-25 12:27:10.161586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:14.327 [2024-11-25 12:27:10.161591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.327 [2024-11-25 12:27:10.161596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:14.327 [2024-11-25 12:27:10.161601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.327 [2024-11-25 12:27:10.161606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:14.327 [2024-11-25 12:27:10.161610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.327 [2024-11-25 12:27:10.161615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:14.327 [2024-11-25 12:27:10.161620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:14.327 [2024-11-25 12:27:10.161624] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:28:14.327 [2024-11-25 12:27:10.162057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8ed2d0 (9): Bad file descriptor 00:28:14.327 [2024-11-25 12:27:10.163064] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:28:14.327 [2024-11-25 12:27:10.163081] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:28:14.327 12:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:14.327 12:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:14.327 12:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.327 12:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:14.327 12:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:14.327 12:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:14.327 12:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:14.327 12:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.585 12:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:28:14.585 12:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:28:14.585 12:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:14.585 12:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:28:14.585 12:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:14.585 12:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:14.585 12:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.585 12:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:14.585 12:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:14.585 12:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:14.585 12:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:14.585 12:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.585 12:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:14.585 12:27:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:15.516 12:27:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:15.516 12:27:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:15.516 12:27:11 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.516 12:27:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:15.516 12:27:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:15.516 12:27:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:15.516 12:27:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:15.516 12:27:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.516 12:27:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:15.516 12:27:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:16.449 [2024-11-25 12:27:12.174500] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:28:16.449 [2024-11-25 12:27:12.174529] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:28:16.449 [2024-11-25 12:27:12.174539] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:28:16.449 [2024-11-25 12:27:12.260590] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:28:16.449 12:27:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:16.449 12:27:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:16.449 12:27:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:16.449 12:27:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.449 12:27:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:16.449 12:27:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:16.449 12:27:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:16.449 12:27:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.449 [2024-11-25 12:27:12.314873] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:28:16.449 [2024-11-25 12:27:12.315267] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x95a070:1 started. 
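The re-attach above follows the host-side recovery steps traced at discovery_remove_ifc.sh@82-@86; consolidated, that half of the test is just:

# Restore the target address inside its namespace, bring the interface back
# up, then wait for discovery to re-create the namespace bdev (commands as
# traced above; wait_for_bdev is the polling helper sketched earlier).
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
wait_for_bdev nvme1n1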
00:28:16.449 [2024-11-25 12:27:12.316225] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:16.449 [2024-11-25 12:27:12.316252] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:16.449 [2024-11-25 12:27:12.316266] bdev_nvme.c:8275:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:16.449 [2024-11-25 12:27:12.316276] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:28:16.449 [2024-11-25 12:27:12.316281] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:28:16.449 [2024-11-25 12:27:12.323223] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x95a070 was disconnected and freed. delete nvme_qpair. 00:28:16.707 12:27:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:16.707 12:27:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:17.641 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:17.641 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:17.641 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:17.641 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.641 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:17.641 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:17.641 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:17.641 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.641 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:28:17.641 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:28:17.641 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 90451 00:28:17.642 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 90451 ']' 00:28:17.642 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 90451 00:28:17.642 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:28:17.642 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:17.642 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90451 00:28:17.642 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:17.642 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:17.642 killing process with pid 90451 00:28:17.642 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90451' 00:28:17.642 12:27:13 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 90451 00:28:17.642 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 90451 00:28:17.642 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:28:17.642 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:17.642 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:28:17.901 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:17.901 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:28:17.901 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:17.901 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:17.901 rmmod nvme_tcp 00:28:17.901 rmmod nvme_fabrics 00:28:17.901 rmmod nvme_keyring 00:28:17.901 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:17.901 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:28:17.901 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:28:17.901 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 90401 ']' 00:28:17.901 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 90401 00:28:17.901 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 90401 ']' 00:28:17.901 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 90401 00:28:17.901 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:28:17.901 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:17.901 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90401 00:28:17.901 killing process with pid 90401 00:28:17.901 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:17.901 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:17.901 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90401' 00:28:17.901 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 90401 00:28:17.901 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 90401 00:28:17.901 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:17.901 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:17.901 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:17.902 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:28:17.902 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:28:17.902 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:28:17.902 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:28:17.902 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:17.902 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:28:17.902 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:28:17.902 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:28:17.902 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:28:17.902 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:28:17.902 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:28:18.161 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:28:18.161 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:28:18.161 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:28:18.161 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:28:18.161 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:28:18.161 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:28:18.161 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:18.161 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:18.161 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:28:18.161 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:18.161 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:18.161 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:18.161 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:28:18.161 00:28:18.161 real 0m14.894s 00:28:18.161 user 0m26.557s 00:28:18.161 sys 0m1.336s 00:28:18.161 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:18.161 12:27:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:28:18.161 ************************************ 00:28:18.161 END TEST nvmf_discovery_remove_ifc 00:28:18.161 ************************************ 00:28:18.161 12:27:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:28:18.161 12:27:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:18.161 12:27:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:18.161 12:27:13 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.161 ************************************ 00:28:18.161 START TEST nvmf_identify_kernel_target 00:28:18.161 ************************************ 00:28:18.161 12:27:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:28:18.421 * Looking for test storage... 00:28:18.421 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:28:18.421 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:18.421 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:18.421 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:28:18.421 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:18.421 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:18.421 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:18.421 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:18.421 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:28:18.421 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:28:18.421 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:28:18.421 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:28:18.421 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:28:18.421 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:28:18.421 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:28:18.421 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:18.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.422 --rc genhtml_branch_coverage=1 00:28:18.422 --rc genhtml_function_coverage=1 00:28:18.422 --rc genhtml_legend=1 00:28:18.422 --rc geninfo_all_blocks=1 00:28:18.422 --rc geninfo_unexecuted_blocks=1 00:28:18.422 00:28:18.422 ' 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:18.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.422 --rc genhtml_branch_coverage=1 00:28:18.422 --rc genhtml_function_coverage=1 00:28:18.422 --rc genhtml_legend=1 00:28:18.422 --rc geninfo_all_blocks=1 00:28:18.422 --rc geninfo_unexecuted_blocks=1 00:28:18.422 00:28:18.422 ' 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:18.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.422 --rc genhtml_branch_coverage=1 00:28:18.422 --rc genhtml_function_coverage=1 00:28:18.422 --rc genhtml_legend=1 00:28:18.422 --rc geninfo_all_blocks=1 00:28:18.422 --rc geninfo_unexecuted_blocks=1 00:28:18.422 00:28:18.422 ' 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:18.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:18.422 --rc genhtml_branch_coverage=1 00:28:18.422 --rc genhtml_function_coverage=1 00:28:18.422 --rc genhtml_legend=1 00:28:18.422 --rc geninfo_all_blocks=1 00:28:18.422 --rc geninfo_unexecuted_blocks=1 00:28:18.422 00:28:18.422 ' 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
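The lt/cmp_versions trace above (scripts/common.sh) decides whether the installed lcov predates 2.x by a field-wise numeric compare; an equivalent check, assuming GNU sort's -V version ordering rather than the script's own loop, would be:

# Not the script's field-by-field loop, just an equivalent test assuming
# GNU coreutils: $1 is strictly older than $2 when the two differ and $1
# sorts first under version ordering.
lt() {
    [[ "$1" != "$2" ]] &&
        [[ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" == "$1" ]]
}
lt 1.15 2   # succeeds, so the 1.x-style lcov options are exported above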
00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:18.422 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:28:18.422 12:27:14 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:28:18.422 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:28:18.423 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:18.423 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:28:18.423 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:28:18.423 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:28:18.423 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:18.423 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:28:18.423 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:18.423 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:28:18.423 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:18.423 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:28:18.423 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:18.423 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:18.423 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:18.423 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:18.423 12:27:14 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:18.423 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:18.423 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:28:18.423 Cannot find device "nvmf_init_br" 00:28:18.423 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:28:18.423 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:28:18.423 Cannot find device "nvmf_init_br2" 00:28:18.423 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:28:18.423 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:28:18.423 Cannot find device "nvmf_tgt_br" 00:28:18.423 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:28:18.423 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:28:18.423 Cannot find device "nvmf_tgt_br2" 00:28:18.423 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:28:18.423 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:28:18.423 Cannot find device "nvmf_init_br" 00:28:18.423 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:28:18.423 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:28:18.423 Cannot find device "nvmf_init_br2" 00:28:18.423 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:28:18.423 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:28:18.423 Cannot find device "nvmf_tgt_br" 00:28:18.423 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:28:18.423 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:28:18.423 Cannot find device "nvmf_tgt_br2" 00:28:18.423 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:28:18.423 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:28:18.423 Cannot find device "nvmf_br" 00:28:18.423 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:28:18.423 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:28:18.423 Cannot find device "nvmf_init_if" 00:28:18.423 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:28:18.423 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:28:18.423 Cannot find device "nvmf_init_if2" 00:28:18.423 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:28:18.423 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:18.423 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:18.423 12:27:14 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:28:18.423 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:18.423 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:18.423 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:28:18.423 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:28:18.423 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:18.423 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:28:18.423 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:18.682 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:18.682 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:18.682 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:18.682 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:18.682 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:28:18.682 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:28:18.682 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:28:18.682 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:28:18.682 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:28:18.682 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:28:18.682 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:28:18.682 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:28:18.682 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:28:18.682 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:18.682 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:18.682 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:18.682 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:28:18.682 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:28:18.682 12:27:14 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:28:18.682 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:28:18.682 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:18.682 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:18.682 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:18.682 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:28:18.682 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:28:18.683 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:28:18.683 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:18.683 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:28:18.683 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:28:18.683 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:18.683 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:28:18.683 00:28:18.683 --- 10.0.0.3 ping statistics --- 00:28:18.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:18.683 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:28:18.683 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:28:18.683 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:28:18.683 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.031 ms 00:28:18.683 00:28:18.683 --- 10.0.0.4 ping statistics --- 00:28:18.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:18.683 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:28:18.683 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:18.683 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:18.683 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:28:18.683 00:28:18.683 --- 10.0.0.1 ping statistics --- 00:28:18.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:18.683 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:28:18.683 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:28:18.683 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:18.683 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:28:18.683 00:28:18.683 --- 10.0.0.2 ping statistics --- 00:28:18.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:18.683 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:28:18.683 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:18.683 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:28:18.683 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:18.683 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:18.683 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:18.683 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:18.683 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:18.683 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:18.683 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:18.683 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:28:18.683 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:28:18.683 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:28:18.683 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:18.683 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:18.683 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.683 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.683 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:18.683 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.683 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:18.683 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:18.683 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:18.683 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:28:18.683 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:18.683 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:18.683 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:28:18.683 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:18.683 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:18.683 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:18.683 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:28:18.683 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:28:18.683 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:28:18.683 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:18.683 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:18.941 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:18.941 Waiting for block devices as requested 00:28:18.941 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:19.199 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:19.199 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:19.199 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:19.199 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:28:19.199 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:28:19.199 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:19.199 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:19.199 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:28:19.199 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:28:19.199 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:28:19.199 No valid GPT data, bailing 00:28:19.199 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:19.199 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:28:19.199 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:28:19.199 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:28:19.199 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:19.199 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:28:19.199 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:28:19.199 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:28:19.199 12:27:14 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:28:19.199 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:19.199 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:28:19.199 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:28:19.199 12:27:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:28:19.199 No valid GPT data, bailing 00:28:19.199 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:28:19.199 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:28:19.199 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:28:19.199 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:28:19.199 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:19.199 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:28:19.199 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:28:19.199 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:28:19.199 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:28:19.199 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:19.199 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:28:19.199 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:28:19.199 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:28:19.199 No valid GPT data, bailing 00:28:19.458 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:28:19.458 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:28:19.458 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:28:19.458 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:28:19.458 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:19.458 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:28:19.458 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:28:19.458 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:28:19.458 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:28:19.458 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:19.458 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:28:19.458 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:28:19.458 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:28:19.458 No valid GPT data, bailing 00:28:19.458 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:28:19.458 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:28:19.458 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:28:19.458 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:28:19.458 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:28:19.458 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:19.458 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:19.458 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:19.458 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:19.458 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:28:19.458 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:28:19.458 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:28:19.458 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:28:19.458 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:28:19.458 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:28:19.458 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:28:19.458 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:19.458 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid=5934b511-67a7-4c73-84f0-52fdf36bb79d -a 10.0.0.1 -t tcp -s 4420 00:28:19.458 00:28:19.459 Discovery Log Number of Records 2, Generation counter 2 00:28:19.459 =====Discovery Log Entry 0====== 00:28:19.459 trtype: tcp 00:28:19.459 adrfam: ipv4 00:28:19.459 subtype: current discovery subsystem 00:28:19.459 treq: not specified, sq flow control disable supported 00:28:19.459 portid: 1 00:28:19.459 trsvcid: 4420 00:28:19.459 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:19.459 traddr: 10.0.0.1 00:28:19.459 eflags: none 00:28:19.459 sectype: none 00:28:19.459 =====Discovery Log Entry 1====== 00:28:19.459 trtype: tcp 00:28:19.459 adrfam: ipv4 00:28:19.459 subtype: nvme subsystem 00:28:19.459 treq: not 
specified, sq flow control disable supported 00:28:19.459 portid: 1 00:28:19.459 trsvcid: 4420 00:28:19.459 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:19.459 traddr: 10.0.0.1 00:28:19.459 eflags: none 00:28:19.459 sectype: none 00:28:19.459 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:28:19.459 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:28:19.718 ===================================================== 00:28:19.718 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:19.718 ===================================================== 00:28:19.718 Controller Capabilities/Features 00:28:19.718 ================================ 00:28:19.718 Vendor ID: 0000 00:28:19.718 Subsystem Vendor ID: 0000 00:28:19.718 Serial Number: 2aa8f8d0daf6d03c0eb9 00:28:19.718 Model Number: Linux 00:28:19.718 Firmware Version: 6.8.9-20 00:28:19.718 Recommended Arb Burst: 0 00:28:19.718 IEEE OUI Identifier: 00 00 00 00:28:19.718 Multi-path I/O 00:28:19.718 May have multiple subsystem ports: No 00:28:19.718 May have multiple controllers: No 00:28:19.718 Associated with SR-IOV VF: No 00:28:19.718 Max Data Transfer Size: Unlimited 00:28:19.718 Max Number of Namespaces: 0 00:28:19.718 Max Number of I/O Queues: 1024 00:28:19.718 NVMe Specification Version (VS): 1.3 00:28:19.718 NVMe Specification Version (Identify): 1.3 00:28:19.718 Maximum Queue Entries: 1024 00:28:19.718 Contiguous Queues Required: No 00:28:19.718 Arbitration Mechanisms Supported 00:28:19.718 Weighted Round Robin: Not Supported 00:28:19.718 Vendor Specific: Not Supported 00:28:19.718 Reset Timeout: 7500 ms 00:28:19.718 Doorbell Stride: 4 bytes 00:28:19.718 NVM Subsystem Reset: Not Supported 00:28:19.718 Command Sets Supported 00:28:19.718 NVM Command Set: Supported 00:28:19.718 Boot Partition: Not Supported 00:28:19.718 Memory Page Size Minimum: 4096 bytes 00:28:19.718 Memory Page Size Maximum: 4096 bytes 00:28:19.718 Persistent Memory Region: Not Supported 00:28:19.718 Optional Asynchronous Events Supported 00:28:19.718 Namespace Attribute Notices: Not Supported 00:28:19.718 Firmware Activation Notices: Not Supported 00:28:19.718 ANA Change Notices: Not Supported 00:28:19.718 PLE Aggregate Log Change Notices: Not Supported 00:28:19.718 LBA Status Info Alert Notices: Not Supported 00:28:19.718 EGE Aggregate Log Change Notices: Not Supported 00:28:19.718 Normal NVM Subsystem Shutdown event: Not Supported 00:28:19.718 Zone Descriptor Change Notices: Not Supported 00:28:19.718 Discovery Log Change Notices: Supported 00:28:19.718 Controller Attributes 00:28:19.718 128-bit Host Identifier: Not Supported 00:28:19.718 Non-Operational Permissive Mode: Not Supported 00:28:19.718 NVM Sets: Not Supported 00:28:19.718 Read Recovery Levels: Not Supported 00:28:19.718 Endurance Groups: Not Supported 00:28:19.718 Predictable Latency Mode: Not Supported 00:28:19.718 Traffic Based Keep ALive: Not Supported 00:28:19.718 Namespace Granularity: Not Supported 00:28:19.718 SQ Associations: Not Supported 00:28:19.718 UUID List: Not Supported 00:28:19.718 Multi-Domain Subsystem: Not Supported 00:28:19.718 Fixed Capacity Management: Not Supported 00:28:19.718 Variable Capacity Management: Not Supported 00:28:19.718 Delete Endurance Group: Not Supported 00:28:19.718 Delete NVM Set: Not Supported 00:28:19.718 Extended LBA Formats Supported: Not Supported 00:28:19.718 Flexible Data 
Placement Supported: Not Supported 00:28:19.718 00:28:19.718 Controller Memory Buffer Support 00:28:19.718 ================================ 00:28:19.718 Supported: No 00:28:19.718 00:28:19.718 Persistent Memory Region Support 00:28:19.718 ================================ 00:28:19.718 Supported: No 00:28:19.718 00:28:19.718 Admin Command Set Attributes 00:28:19.718 ============================ 00:28:19.718 Security Send/Receive: Not Supported 00:28:19.718 Format NVM: Not Supported 00:28:19.718 Firmware Activate/Download: Not Supported 00:28:19.718 Namespace Management: Not Supported 00:28:19.718 Device Self-Test: Not Supported 00:28:19.718 Directives: Not Supported 00:28:19.718 NVMe-MI: Not Supported 00:28:19.718 Virtualization Management: Not Supported 00:28:19.718 Doorbell Buffer Config: Not Supported 00:28:19.718 Get LBA Status Capability: Not Supported 00:28:19.718 Command & Feature Lockdown Capability: Not Supported 00:28:19.718 Abort Command Limit: 1 00:28:19.718 Async Event Request Limit: 1 00:28:19.718 Number of Firmware Slots: N/A 00:28:19.718 Firmware Slot 1 Read-Only: N/A 00:28:19.718 Firmware Activation Without Reset: N/A 00:28:19.718 Multiple Update Detection Support: N/A 00:28:19.718 Firmware Update Granularity: No Information Provided 00:28:19.718 Per-Namespace SMART Log: No 00:28:19.718 Asymmetric Namespace Access Log Page: Not Supported 00:28:19.718 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:28:19.718 Command Effects Log Page: Not Supported 00:28:19.718 Get Log Page Extended Data: Supported 00:28:19.718 Telemetry Log Pages: Not Supported 00:28:19.718 Persistent Event Log Pages: Not Supported 00:28:19.718 Supported Log Pages Log Page: May Support 00:28:19.718 Commands Supported & Effects Log Page: Not Supported 00:28:19.718 Feature Identifiers & Effects Log Page:May Support 00:28:19.718 NVMe-MI Commands & Effects Log Page: May Support 00:28:19.718 Data Area 4 for Telemetry Log: Not Supported 00:28:19.718 Error Log Page Entries Supported: 1 00:28:19.718 Keep Alive: Not Supported 00:28:19.718 00:28:19.718 NVM Command Set Attributes 00:28:19.718 ========================== 00:28:19.718 Submission Queue Entry Size 00:28:19.718 Max: 1 00:28:19.718 Min: 1 00:28:19.718 Completion Queue Entry Size 00:28:19.718 Max: 1 00:28:19.718 Min: 1 00:28:19.718 Number of Namespaces: 0 00:28:19.718 Compare Command: Not Supported 00:28:19.718 Write Uncorrectable Command: Not Supported 00:28:19.718 Dataset Management Command: Not Supported 00:28:19.718 Write Zeroes Command: Not Supported 00:28:19.718 Set Features Save Field: Not Supported 00:28:19.718 Reservations: Not Supported 00:28:19.718 Timestamp: Not Supported 00:28:19.718 Copy: Not Supported 00:28:19.718 Volatile Write Cache: Not Present 00:28:19.718 Atomic Write Unit (Normal): 1 00:28:19.718 Atomic Write Unit (PFail): 1 00:28:19.718 Atomic Compare & Write Unit: 1 00:28:19.718 Fused Compare & Write: Not Supported 00:28:19.718 Scatter-Gather List 00:28:19.718 SGL Command Set: Supported 00:28:19.718 SGL Keyed: Not Supported 00:28:19.718 SGL Bit Bucket Descriptor: Not Supported 00:28:19.718 SGL Metadata Pointer: Not Supported 00:28:19.719 Oversized SGL: Not Supported 00:28:19.719 SGL Metadata Address: Not Supported 00:28:19.719 SGL Offset: Supported 00:28:19.719 Transport SGL Data Block: Not Supported 00:28:19.719 Replay Protected Memory Block: Not Supported 00:28:19.719 00:28:19.719 Firmware Slot Information 00:28:19.719 ========================= 00:28:19.719 Active slot: 0 00:28:19.719 00:28:19.719 00:28:19.719 Error Log 
00:28:19.719 ========= 00:28:19.719 00:28:19.719 Active Namespaces 00:28:19.719 ================= 00:28:19.719 Discovery Log Page 00:28:19.719 ================== 00:28:19.719 Generation Counter: 2 00:28:19.719 Number of Records: 2 00:28:19.719 Record Format: 0 00:28:19.719 00:28:19.719 Discovery Log Entry 0 00:28:19.719 ---------------------- 00:28:19.719 Transport Type: 3 (TCP) 00:28:19.719 Address Family: 1 (IPv4) 00:28:19.719 Subsystem Type: 3 (Current Discovery Subsystem) 00:28:19.719 Entry Flags: 00:28:19.719 Duplicate Returned Information: 0 00:28:19.719 Explicit Persistent Connection Support for Discovery: 0 00:28:19.719 Transport Requirements: 00:28:19.719 Secure Channel: Not Specified 00:28:19.719 Port ID: 1 (0x0001) 00:28:19.719 Controller ID: 65535 (0xffff) 00:28:19.719 Admin Max SQ Size: 32 00:28:19.719 Transport Service Identifier: 4420 00:28:19.719 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:19.719 Transport Address: 10.0.0.1 00:28:19.719 Discovery Log Entry 1 00:28:19.719 ---------------------- 00:28:19.719 Transport Type: 3 (TCP) 00:28:19.719 Address Family: 1 (IPv4) 00:28:19.719 Subsystem Type: 2 (NVM Subsystem) 00:28:19.719 Entry Flags: 00:28:19.719 Duplicate Returned Information: 0 00:28:19.719 Explicit Persistent Connection Support for Discovery: 0 00:28:19.719 Transport Requirements: 00:28:19.719 Secure Channel: Not Specified 00:28:19.719 Port ID: 1 (0x0001) 00:28:19.719 Controller ID: 65535 (0xffff) 00:28:19.719 Admin Max SQ Size: 32 00:28:19.719 Transport Service Identifier: 4420 00:28:19.719 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:28:19.719 Transport Address: 10.0.0.1 00:28:19.719 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:19.719 get_feature(0x01) failed 00:28:19.719 get_feature(0x02) failed 00:28:19.719 get_feature(0x04) failed 00:28:19.719 ===================================================== 00:28:19.719 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:19.719 ===================================================== 00:28:19.719 Controller Capabilities/Features 00:28:19.719 ================================ 00:28:19.719 Vendor ID: 0000 00:28:19.719 Subsystem Vendor ID: 0000 00:28:19.719 Serial Number: 14463b0df5886ce3ea77 00:28:19.719 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:28:19.719 Firmware Version: 6.8.9-20 00:28:19.719 Recommended Arb Burst: 6 00:28:19.719 IEEE OUI Identifier: 00 00 00 00:28:19.719 Multi-path I/O 00:28:19.719 May have multiple subsystem ports: Yes 00:28:19.719 May have multiple controllers: Yes 00:28:19.719 Associated with SR-IOV VF: No 00:28:19.719 Max Data Transfer Size: Unlimited 00:28:19.719 Max Number of Namespaces: 1024 00:28:19.719 Max Number of I/O Queues: 128 00:28:19.719 NVMe Specification Version (VS): 1.3 00:28:19.719 NVMe Specification Version (Identify): 1.3 00:28:19.719 Maximum Queue Entries: 1024 00:28:19.719 Contiguous Queues Required: No 00:28:19.719 Arbitration Mechanisms Supported 00:28:19.719 Weighted Round Robin: Not Supported 00:28:19.719 Vendor Specific: Not Supported 00:28:19.719 Reset Timeout: 7500 ms 00:28:19.719 Doorbell Stride: 4 bytes 00:28:19.719 NVM Subsystem Reset: Not Supported 00:28:19.719 Command Sets Supported 00:28:19.719 NVM Command Set: Supported 00:28:19.719 Boot Partition: Not Supported 00:28:19.719 Memory 
Page Size Minimum: 4096 bytes 00:28:19.719 Memory Page Size Maximum: 4096 bytes 00:28:19.719 Persistent Memory Region: Not Supported 00:28:19.719 Optional Asynchronous Events Supported 00:28:19.719 Namespace Attribute Notices: Supported 00:28:19.719 Firmware Activation Notices: Not Supported 00:28:19.719 ANA Change Notices: Supported 00:28:19.719 PLE Aggregate Log Change Notices: Not Supported 00:28:19.719 LBA Status Info Alert Notices: Not Supported 00:28:19.719 EGE Aggregate Log Change Notices: Not Supported 00:28:19.719 Normal NVM Subsystem Shutdown event: Not Supported 00:28:19.719 Zone Descriptor Change Notices: Not Supported 00:28:19.719 Discovery Log Change Notices: Not Supported 00:28:19.719 Controller Attributes 00:28:19.719 128-bit Host Identifier: Supported 00:28:19.719 Non-Operational Permissive Mode: Not Supported 00:28:19.719 NVM Sets: Not Supported 00:28:19.719 Read Recovery Levels: Not Supported 00:28:19.719 Endurance Groups: Not Supported 00:28:19.719 Predictable Latency Mode: Not Supported 00:28:19.719 Traffic Based Keep ALive: Supported 00:28:19.719 Namespace Granularity: Not Supported 00:28:19.719 SQ Associations: Not Supported 00:28:19.719 UUID List: Not Supported 00:28:19.719 Multi-Domain Subsystem: Not Supported 00:28:19.719 Fixed Capacity Management: Not Supported 00:28:19.719 Variable Capacity Management: Not Supported 00:28:19.719 Delete Endurance Group: Not Supported 00:28:19.719 Delete NVM Set: Not Supported 00:28:19.719 Extended LBA Formats Supported: Not Supported 00:28:19.719 Flexible Data Placement Supported: Not Supported 00:28:19.719 00:28:19.719 Controller Memory Buffer Support 00:28:19.719 ================================ 00:28:19.719 Supported: No 00:28:19.719 00:28:19.719 Persistent Memory Region Support 00:28:19.719 ================================ 00:28:19.719 Supported: No 00:28:19.719 00:28:19.719 Admin Command Set Attributes 00:28:19.719 ============================ 00:28:19.719 Security Send/Receive: Not Supported 00:28:19.719 Format NVM: Not Supported 00:28:19.719 Firmware Activate/Download: Not Supported 00:28:19.719 Namespace Management: Not Supported 00:28:19.719 Device Self-Test: Not Supported 00:28:19.719 Directives: Not Supported 00:28:19.719 NVMe-MI: Not Supported 00:28:19.719 Virtualization Management: Not Supported 00:28:19.719 Doorbell Buffer Config: Not Supported 00:28:19.719 Get LBA Status Capability: Not Supported 00:28:19.719 Command & Feature Lockdown Capability: Not Supported 00:28:19.719 Abort Command Limit: 4 00:28:19.719 Async Event Request Limit: 4 00:28:19.719 Number of Firmware Slots: N/A 00:28:19.719 Firmware Slot 1 Read-Only: N/A 00:28:19.719 Firmware Activation Without Reset: N/A 00:28:19.719 Multiple Update Detection Support: N/A 00:28:19.719 Firmware Update Granularity: No Information Provided 00:28:19.719 Per-Namespace SMART Log: Yes 00:28:19.719 Asymmetric Namespace Access Log Page: Supported 00:28:19.719 ANA Transition Time : 10 sec 00:28:19.719 00:28:19.719 Asymmetric Namespace Access Capabilities 00:28:19.719 ANA Optimized State : Supported 00:28:19.719 ANA Non-Optimized State : Supported 00:28:19.719 ANA Inaccessible State : Supported 00:28:19.719 ANA Persistent Loss State : Supported 00:28:19.719 ANA Change State : Supported 00:28:19.719 ANAGRPID is not changed : No 00:28:19.719 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:28:19.719 00:28:19.719 ANA Group Identifier Maximum : 128 00:28:19.719 Number of ANA Group Identifiers : 128 00:28:19.719 Max Number of Allowed Namespaces : 1024 00:28:19.719 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:28:19.719 Command Effects Log Page: Supported 00:28:19.719 Get Log Page Extended Data: Supported 00:28:19.719 Telemetry Log Pages: Not Supported 00:28:19.719 Persistent Event Log Pages: Not Supported 00:28:19.719 Supported Log Pages Log Page: May Support 00:28:19.719 Commands Supported & Effects Log Page: Not Supported 00:28:19.719 Feature Identifiers & Effects Log Page:May Support 00:28:19.719 NVMe-MI Commands & Effects Log Page: May Support 00:28:19.719 Data Area 4 for Telemetry Log: Not Supported 00:28:19.719 Error Log Page Entries Supported: 128 00:28:19.719 Keep Alive: Supported 00:28:19.719 Keep Alive Granularity: 1000 ms 00:28:19.719 00:28:19.719 NVM Command Set Attributes 00:28:19.719 ========================== 00:28:19.719 Submission Queue Entry Size 00:28:19.719 Max: 64 00:28:19.719 Min: 64 00:28:19.719 Completion Queue Entry Size 00:28:19.719 Max: 16 00:28:19.719 Min: 16 00:28:19.719 Number of Namespaces: 1024 00:28:19.719 Compare Command: Not Supported 00:28:19.719 Write Uncorrectable Command: Not Supported 00:28:19.719 Dataset Management Command: Supported 00:28:19.719 Write Zeroes Command: Supported 00:28:19.719 Set Features Save Field: Not Supported 00:28:19.719 Reservations: Not Supported 00:28:19.719 Timestamp: Not Supported 00:28:19.720 Copy: Not Supported 00:28:19.720 Volatile Write Cache: Present 00:28:19.720 Atomic Write Unit (Normal): 1 00:28:19.720 Atomic Write Unit (PFail): 1 00:28:19.720 Atomic Compare & Write Unit: 1 00:28:19.720 Fused Compare & Write: Not Supported 00:28:19.720 Scatter-Gather List 00:28:19.720 SGL Command Set: Supported 00:28:19.720 SGL Keyed: Not Supported 00:28:19.720 SGL Bit Bucket Descriptor: Not Supported 00:28:19.720 SGL Metadata Pointer: Not Supported 00:28:19.720 Oversized SGL: Not Supported 00:28:19.720 SGL Metadata Address: Not Supported 00:28:19.720 SGL Offset: Supported 00:28:19.720 Transport SGL Data Block: Not Supported 00:28:19.720 Replay Protected Memory Block: Not Supported 00:28:19.720 00:28:19.720 Firmware Slot Information 00:28:19.720 ========================= 00:28:19.720 Active slot: 0 00:28:19.720 00:28:19.720 Asymmetric Namespace Access 00:28:19.720 =========================== 00:28:19.720 Change Count : 0 00:28:19.720 Number of ANA Group Descriptors : 1 00:28:19.720 ANA Group Descriptor : 0 00:28:19.720 ANA Group ID : 1 00:28:19.720 Number of NSID Values : 1 00:28:19.720 Change Count : 0 00:28:19.720 ANA State : 1 00:28:19.720 Namespace Identifier : 1 00:28:19.720 00:28:19.720 Commands Supported and Effects 00:28:19.720 ============================== 00:28:19.720 Admin Commands 00:28:19.720 -------------- 00:28:19.720 Get Log Page (02h): Supported 00:28:19.720 Identify (06h): Supported 00:28:19.720 Abort (08h): Supported 00:28:19.720 Set Features (09h): Supported 00:28:19.720 Get Features (0Ah): Supported 00:28:19.720 Asynchronous Event Request (0Ch): Supported 00:28:19.720 Keep Alive (18h): Supported 00:28:19.720 I/O Commands 00:28:19.720 ------------ 00:28:19.720 Flush (00h): Supported 00:28:19.720 Write (01h): Supported LBA-Change 00:28:19.720 Read (02h): Supported 00:28:19.720 Write Zeroes (08h): Supported LBA-Change 00:28:19.720 Dataset Management (09h): Supported 00:28:19.720 00:28:19.720 Error Log 00:28:19.720 ========= 00:28:19.720 Entry: 0 00:28:19.720 Error Count: 0x3 00:28:19.720 Submission Queue Id: 0x0 00:28:19.720 Command Id: 0x5 00:28:19.720 Phase Bit: 0 00:28:19.720 Status Code: 0x2 00:28:19.720 Status Code Type: 0x0 00:28:19.720 Do Not Retry: 1 00:28:19.720 Error 
Location: 0x28 00:28:19.720 LBA: 0x0 00:28:19.720 Namespace: 0x0 00:28:19.720 Vendor Log Page: 0x0 00:28:19.720 ----------- 00:28:19.720 Entry: 1 00:28:19.720 Error Count: 0x2 00:28:19.720 Submission Queue Id: 0x0 00:28:19.720 Command Id: 0x5 00:28:19.720 Phase Bit: 0 00:28:19.720 Status Code: 0x2 00:28:19.720 Status Code Type: 0x0 00:28:19.720 Do Not Retry: 1 00:28:19.720 Error Location: 0x28 00:28:19.720 LBA: 0x0 00:28:19.720 Namespace: 0x0 00:28:19.720 Vendor Log Page: 0x0 00:28:19.720 ----------- 00:28:19.720 Entry: 2 00:28:19.720 Error Count: 0x1 00:28:19.720 Submission Queue Id: 0x0 00:28:19.720 Command Id: 0x4 00:28:19.720 Phase Bit: 0 00:28:19.720 Status Code: 0x2 00:28:19.720 Status Code Type: 0x0 00:28:19.720 Do Not Retry: 1 00:28:19.720 Error Location: 0x28 00:28:19.720 LBA: 0x0 00:28:19.720 Namespace: 0x0 00:28:19.720 Vendor Log Page: 0x0 00:28:19.720 00:28:19.720 Number of Queues 00:28:19.720 ================ 00:28:19.720 Number of I/O Submission Queues: 128 00:28:19.720 Number of I/O Completion Queues: 128 00:28:19.720 00:28:19.720 ZNS Specific Controller Data 00:28:19.720 ============================ 00:28:19.720 Zone Append Size Limit: 0 00:28:19.720 00:28:19.720 00:28:19.720 Active Namespaces 00:28:19.720 ================= 00:28:19.720 get_feature(0x05) failed 00:28:19.720 Namespace ID:1 00:28:19.720 Command Set Identifier: NVM (00h) 00:28:19.720 Deallocate: Supported 00:28:19.720 Deallocated/Unwritten Error: Not Supported 00:28:19.720 Deallocated Read Value: Unknown 00:28:19.720 Deallocate in Write Zeroes: Not Supported 00:28:19.720 Deallocated Guard Field: 0xFFFF 00:28:19.720 Flush: Supported 00:28:19.720 Reservation: Not Supported 00:28:19.720 Namespace Sharing Capabilities: Multiple Controllers 00:28:19.720 Size (in LBAs): 1310720 (5GiB) 00:28:19.720 Capacity (in LBAs): 1310720 (5GiB) 00:28:19.720 Utilization (in LBAs): 1310720 (5GiB) 00:28:19.720 UUID: 570d029b-0b67-40bc-bd68-eb6c74300f7b 00:28:19.720 Thin Provisioning: Not Supported 00:28:19.720 Per-NS Atomic Units: Yes 00:28:19.720 Atomic Boundary Size (Normal): 0 00:28:19.720 Atomic Boundary Size (PFail): 0 00:28:19.720 Atomic Boundary Offset: 0 00:28:19.720 NGUID/EUI64 Never Reused: No 00:28:19.720 ANA group ID: 1 00:28:19.720 Namespace Write Protected: No 00:28:19.720 Number of LBA Formats: 1 00:28:19.720 Current LBA Format: LBA Format #00 00:28:19.720 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:28:19.720 00:28:19.720 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:28:19.720 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:19.720 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:28:19.720 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:19.720 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:28:19.720 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:19.720 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:19.720 rmmod nvme_tcp 00:28:19.720 rmmod nvme_fabrics 00:28:19.979 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:19.979 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:28:19.979 12:27:15 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:28:19.979 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:28:19.979 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:19.979 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:19.979 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:19.979 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:28:19.979 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:28:19.979 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:28:19.979 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:19.979 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:19.979 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:28:19.979 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:28:19.979 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:28:19.979 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:28:19.979 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:28:19.979 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:28:19.979 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:28:19.979 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:28:19.979 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:28:19.979 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:28:19.979 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:28:19.979 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:28:19.979 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:19.979 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:19.979 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:28:19.979 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:19.979 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:19.979 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:19.979 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:28:19.979 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:28:19.979 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:28:19.979 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:28:19.979 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:19.979 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:19.979 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:19.979 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:19.979 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:28:19.979 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:28:20.237 12:27:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:20.496 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:20.753 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:28:20.753 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:28:20.753 00:28:20.753 real 0m2.520s 00:28:20.753 user 0m0.855s 00:28:20.753 sys 0m1.077s 00:28:20.753 12:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:20.753 ************************************ 00:28:20.753 END TEST nvmf_identify_kernel_target 00:28:20.753 ************************************ 00:28:20.753 12:27:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:28:20.753 12:27:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:28:20.753 12:27:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:20.753 12:27:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:20.753 12:27:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.753 ************************************ 00:28:20.753 START TEST nvmf_auth_host 00:28:20.754 ************************************ 00:28:20.754 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:28:20.754 * Looking for test storage... 
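
The nvmf_identify_kernel_target test above exercises a kernel nvmet target end to end: pick a free, non-zoned NVMe namespace (/dev/nvme1n1 here, after the GPT probes bail on the others), export it over TCP through configfs, run discovery and identify against it, then tear everything down. The xtrace records only the echo side of each redirection, so the attribute file names in this condensed sketch are the standard nvmet configfs names and are an assumption; the commands and values come from the trace:

    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1

    modprobe nvmet                          # exposes the /sys/kernel/config/nvmet tree
    mkdir "$subsys" "$subsys/namespaces/1" "$port"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"    # assumed target file
    echo 1            > "$subsys/attr_allow_any_host"               # assumed: skip host allow-list
    echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"          # back the namespace with the free disk
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"     # linking subsystem to port makes it discoverable
    nvme discover "${NVME_HOST[@]}" -a 10.0.0.1 -t tcp -s 4420   # yields the two log entries above

clean_kernel_target then reverses the sequence (the echo 0 target is likewise hidden by xtrace and assumed to be the namespace enable flag):

    echo 0 > "$subsys/namespaces/1/enable"
    rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
    rmdir "$subsys/namespaces/1" "$port" "$subsys"
    modprobe -r nvmet_tcp nvmet             # unloads cleanly only once the configfs tree is empty
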
00:28:20.754 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:28:20.754 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:20.754 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:28:20.754 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:21.013 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:21.013 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:21.013 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:21.013 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:21.013 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:28:21.013 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:28:21.013 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:28:21.013 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:28:21.013 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:28:21.013 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:28:21.013 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:28:21.013 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:21.013 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:28:21.013 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:28:21.013 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:21.013 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:21.013 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:28:21.013 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:28:21.013 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:21.013 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:28:21.013 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:28:21.013 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:28:21.013 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:28:21.013 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:21.013 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:28:21.013 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:28:21.013 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:21.013 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:21.013 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:28:21.013 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:21.013 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:21.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:21.013 --rc genhtml_branch_coverage=1 00:28:21.013 --rc genhtml_function_coverage=1 00:28:21.013 --rc genhtml_legend=1 00:28:21.013 --rc geninfo_all_blocks=1 00:28:21.013 --rc geninfo_unexecuted_blocks=1 00:28:21.013 00:28:21.013 ' 00:28:21.013 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:21.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:21.013 --rc genhtml_branch_coverage=1 00:28:21.013 --rc genhtml_function_coverage=1 00:28:21.013 --rc genhtml_legend=1 00:28:21.013 --rc geninfo_all_blocks=1 00:28:21.013 --rc geninfo_unexecuted_blocks=1 00:28:21.013 00:28:21.013 ' 00:28:21.013 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:21.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:21.013 --rc genhtml_branch_coverage=1 00:28:21.013 --rc genhtml_function_coverage=1 00:28:21.013 --rc genhtml_legend=1 00:28:21.013 --rc geninfo_all_blocks=1 00:28:21.013 --rc geninfo_unexecuted_blocks=1 00:28:21.013 00:28:21.013 ' 00:28:21.013 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:21.013 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:21.013 --rc genhtml_branch_coverage=1 00:28:21.013 --rc genhtml_function_coverage=1 00:28:21.013 --rc genhtml_legend=1 00:28:21.013 --rc geninfo_all_blocks=1 00:28:21.013 --rc geninfo_unexecuted_blocks=1 00:28:21.013 00:28:21.013 ' 00:28:21.013 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:21.014 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:21.014 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:28:21.015 Cannot find device "nvmf_init_br" 00:28:21.015 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:28:21.015 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:28:21.015 Cannot find device "nvmf_init_br2" 00:28:21.015 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:28:21.015 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:28:21.015 Cannot find device "nvmf_tgt_br" 00:28:21.015 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:28:21.015 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:28:21.015 Cannot find device "nvmf_tgt_br2" 00:28:21.015 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:28:21.015 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:28:21.015 Cannot find device "nvmf_init_br" 00:28:21.015 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:28:21.015 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:28:21.015 Cannot find device "nvmf_init_br2" 00:28:21.015 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:28:21.015 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:28:21.015 Cannot find device "nvmf_tgt_br" 00:28:21.015 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:28:21.015 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:28:21.015 Cannot find device "nvmf_tgt_br2" 00:28:21.015 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:28:21.015 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:28:21.015 Cannot find device "nvmf_br" 00:28:21.015 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:28:21.015 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:28:21.015 Cannot find device "nvmf_init_if" 00:28:21.015 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:28:21.015 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:28:21.015 Cannot find device "nvmf_init_if2" 00:28:21.015 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:28:21.015 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:21.015 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:21.015 12:27:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:28:21.015 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:21.015 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:21.015 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:28:21.015 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:28:21.015 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:21.015 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:28:21.015 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:21.015 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:21.015 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:21.015 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:21.275 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:21.275 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:28:21.275 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:28:21.275 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:28:21.275 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:28:21.275 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:28:21.275 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:28:21.275 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:28:21.275 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:28:21.275 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:28:21.275 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:21.275 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:21.275 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:21.275 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:28:21.275 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:28:21.275 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:28:21.275 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:28:21.275 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
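
All of those interfaces come in veth pairs: the *_if ends carry the addresses (with the tgt pair pushed into the nvmf_tgt_ns_spdk namespace) and the *_br peer ends are enslaved to the nvmf_br bridge, giving initiator and target one L2 segment. Condensed from the trace:

    ip netns add nvmf_tgt_ns_spdk
    for i in init_if init_if2 tgt_if tgt_if2; do
        ip link add "nvmf_$i" type veth peer name "nvmf_${i/if/br}"   # e.g. nvmf_init_if <-> nvmf_init_br
    done
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                          # initiator addresses
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target addresses
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for b in init_br init_br2 tgt_br tgt_br2; do
        ip link set "nvmf_$b" up
        ip link set "nvmf_$b" master nvmf_br                          # one bridge joins all four pairs
    done
    # the *_if ends (and lo inside the namespace) are brought up as well
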
00:28:21.275 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:21.275 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:21.275 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:28:21.275 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:28:21.275 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:28:21.275 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:21.275 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:28:21.275 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:28:21.275 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:21.275 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:28:21.275 00:28:21.275 --- 10.0.0.3 ping statistics --- 00:28:21.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:21.275 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:28:21.275 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:28:21.275 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:28:21.275 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.072 ms 00:28:21.275 00:28:21.275 --- 10.0.0.4 ping statistics --- 00:28:21.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:21.275 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:28:21.275 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:21.275 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:21.275 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:28:21.275 00:28:21.275 --- 10.0.0.1 ping statistics --- 00:28:21.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:21.275 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:28:21.275 12:27:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:28:21.275 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:21.275 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:28:21.275 00:28:21.275 --- 10.0.0.2 ping statistics --- 00:28:21.275 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:21.275 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:28:21.275 12:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:21.275 12:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:28:21.275 12:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:21.275 12:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:21.275 12:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:21.275 12:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:21.275 12:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:21.275 12:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:21.275 12:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:21.275 12:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:28:21.275 12:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:21.275 12:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:21.275 12:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.275 12:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=91465 00:28:21.275 12:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 91465 00:28:21.275 12:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 91465 ']' 00:28:21.275 12:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:21.275 12:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:28:21.275 12:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:21.275 12:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
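
Two conventions are visible in the trace above. First, every firewall rule goes in through ipts, which tags the rule with an SPDK_NVMF comment; that tag is what lets the iptr teardown (seen at the end of the previous test) strip exactly the harness's rules via iptables-save | grep -v | iptables-restore. A sketch of the pair, matching the traced behavior:

    ipts() {
        # run iptables, recording the original rule spec in a comment
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }

    iptr() {
        # rewrite the ruleset minus every SPDK_NVMF-tagged rule
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }

Second, once the four pings confirm the bridge forwards in both directions, nvmfappstart launches nvmf_tgt inside the target namespace and waitforlisten blocks until the RPC socket answers. A minimal illustration of that wait (SPDK's actual waitforlisten is more thorough; rpc_get_methods is simply a cheap RPC to poll with):

    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
    nvmfpid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        kill -0 "$nvmfpid" || exit 1    # give up if the target died during startup
        sleep 0.1
    done
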
00:28:21.275 12:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:21.275 12:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.229 12:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:22.229 12:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:28:22.229 12:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:22.229 12:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:22.229 12:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.229 12:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:22.229 12:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:28:22.229 12:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:28:22.229 12:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:22.229 12:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:22.229 12:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:22.229 12:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:28:22.229 12:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:22.229 12:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:22.229 12:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=cf75607f042af21f3f09e4324c294a89 00:28:22.229 12:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:28:22.229 12:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.dGa 00:28:22.229 12:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key cf75607f042af21f3f09e4324c294a89 0 00:28:22.229 12:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 cf75607f042af21f3f09e4324c294a89 0 00:28:22.229 12:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:22.229 12:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:22.229 12:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=cf75607f042af21f3f09e4324c294a89 00:28:22.229 12:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:28:22.229 12:27:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:22.229 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.dGa 00:28:22.229 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.dGa 00:28:22.229 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.dGa 00:28:22.229 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:28:22.229 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:22.229 12:27:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:22.229 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:22.229 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:28:22.229 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:28:22.229 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:22.229 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=48c19277bd53499f74dbbabc70a66fe4e334fffa3de549d218fee55d1fe25131 00:28:22.229 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:28:22.229 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.EZy 00:28:22.229 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 48c19277bd53499f74dbbabc70a66fe4e334fffa3de549d218fee55d1fe25131 3 00:28:22.229 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 48c19277bd53499f74dbbabc70a66fe4e334fffa3de549d218fee55d1fe25131 3 00:28:22.229 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:22.229 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:22.229 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=48c19277bd53499f74dbbabc70a66fe4e334fffa3de549d218fee55d1fe25131 00:28:22.229 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:28:22.229 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:22.229 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.EZy 00:28:22.229 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.EZy 00:28:22.229 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.EZy 00:28:22.229 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:28:22.229 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:22.229 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:22.229 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:22.229 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:28:22.229 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:28:22.229 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:22.229 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4ae2b3fc3805eca7dc3cbdf7554a643dac0638a475e65072 00:28:22.229 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:28:22.229 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.4gD 00:28:22.229 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4ae2b3fc3805eca7dc3cbdf7554a643dac0638a475e65072 0 00:28:22.229 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4ae2b3fc3805eca7dc3cbdf7554a643dac0638a475e65072 0 
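[Editor's note] The repeated gen_dhchap_key blocks here build the five key/controller-key pairs (keys[0..4], ckeys[0..3]) used for DH-HMAC-CHAP. Each call draws len/2 random bytes as a hex string via xxd, then an inline Python snippet wraps it in the DHHC-1 interchange format "DHHC-1:<dd>:<base64>:", where <dd> encodes the HMAC (00 null, 01 sha256, 02 sha384, 03 sha512) and the base64 payload is the secret bytes followed by their CRC-32, little-endian, as in nvme-cli's gen-dhchap-key output. A minimal sketch, reconstructed from the trace rather than copied from nvmf/common.sh:

# Roughly what gen_dhchap_key does for "null 32": 16 random bytes -> 32 hex
# chars, then the DHHC-1 wrapping done by the "python -" step in the trace.
key=$(xxd -p -c0 -l 16 /dev/urandom)
python3 - "$key" 0 <<'EOF'
import base64, struct, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
# Payload = secret bytes plus their CRC-32 (little-endian), matching the
# DH-HMAC-CHAP secret representation; the trailing colon is part of the format.
payload = base64.b64encode(key + struct.pack('<I', zlib.crc32(key)))
print('DHHC-1:{:02x}:{}:'.format(digest, payload.decode()))
EOF

Decoding the base64 in the keys logged below (e.g. NGFlMmIz... for keys[1]) gives back the ASCII hex string plus four CRC bytes, which is consistent with this reconstruction.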
00:28:22.229 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:22.229 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:22.229 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4ae2b3fc3805eca7dc3cbdf7554a643dac0638a475e65072 00:28:22.229 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:28:22.229 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:22.494 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.4gD 00:28:22.494 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.4gD 00:28:22.494 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.4gD 00:28:22.494 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:28:22.494 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:22.494 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:22.494 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:22.494 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:28:22.494 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:28:22.494 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:22.494 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=bee2843ba4b2f507a6f412f264130353e68e17d689654429 00:28:22.494 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:28:22.494 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.aJO 00:28:22.494 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key bee2843ba4b2f507a6f412f264130353e68e17d689654429 2 00:28:22.494 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 bee2843ba4b2f507a6f412f264130353e68e17d689654429 2 00:28:22.494 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:22.494 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:22.494 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=bee2843ba4b2f507a6f412f264130353e68e17d689654429 00:28:22.494 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:28:22.494 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:22.494 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.aJO 00:28:22.494 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.aJO 00:28:22.494 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.aJO 00:28:22.494 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:22.494 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:22.494 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:22.494 12:27:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:22.494 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:28:22.494 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:22.494 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:22.494 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b074f325ecf566fdfd21c4629fc5c62a 00:28:22.494 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:28:22.494 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.b9z 00:28:22.494 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b074f325ecf566fdfd21c4629fc5c62a 1 00:28:22.494 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b074f325ecf566fdfd21c4629fc5c62a 1 00:28:22.494 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:22.494 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:22.494 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b074f325ecf566fdfd21c4629fc5c62a 00:28:22.494 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:28:22.494 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:22.494 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.b9z 00:28:22.494 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.b9z 00:28:22.494 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.b9z 00:28:22.494 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:22.494 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:22.494 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:22.494 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:22.494 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:28:22.494 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:22.494 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:22.494 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=dad7646a3dce4d8d17850db4f3aabf8a 00:28:22.494 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:28:22.494 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.UET 00:28:22.494 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key dad7646a3dce4d8d17850db4f3aabf8a 1 00:28:22.494 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 dad7646a3dce4d8d17850db4f3aabf8a 1 00:28:22.494 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:22.494 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:22.494 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=dad7646a3dce4d8d17850db4f3aabf8a 00:28:22.494 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:28:22.494 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:22.495 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.UET 00:28:22.495 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.UET 00:28:22.495 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.UET 00:28:22.495 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:28:22.495 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:22.495 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:22.495 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:22.495 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:28:22.495 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:28:22.495 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:22.495 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=bb834c940d5a3701eee61b1e511ce18834477c6f64d816a1 00:28:22.495 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:28:22.495 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.SPi 00:28:22.495 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key bb834c940d5a3701eee61b1e511ce18834477c6f64d816a1 2 00:28:22.495 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 bb834c940d5a3701eee61b1e511ce18834477c6f64d816a1 2 00:28:22.495 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:22.495 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:22.495 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=bb834c940d5a3701eee61b1e511ce18834477c6f64d816a1 00:28:22.495 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:28:22.495 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:22.495 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.SPi 00:28:22.495 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.SPi 00:28:22.495 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.SPi 00:28:22.495 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:28:22.495 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:22.495 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:22.495 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:22.495 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:28:22.495 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:22.495 12:27:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:22.495 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d9c1d81bb9646d7e198ffd9314501a4c 00:28:22.495 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:28:22.495 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.dtj 00:28:22.495 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d9c1d81bb9646d7e198ffd9314501a4c 0 00:28:22.495 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d9c1d81bb9646d7e198ffd9314501a4c 0 00:28:22.495 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:22.495 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:22.495 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d9c1d81bb9646d7e198ffd9314501a4c 00:28:22.495 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:28:22.495 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:22.495 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.dtj 00:28:22.495 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.dtj 00:28:22.495 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.dtj 00:28:22.495 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:28:22.495 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:22.495 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:22.495 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:22.753 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:28:22.753 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:28:22.753 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:22.753 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=acff56e39eff317dccbc2bc650d1b640cefaceda230f21a929d02873b39659d8 00:28:22.753 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:28:22.753 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.7XD 00:28:22.753 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key acff56e39eff317dccbc2bc650d1b640cefaceda230f21a929d02873b39659d8 3 00:28:22.753 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 acff56e39eff317dccbc2bc650d1b640cefaceda230f21a929d02873b39659d8 3 00:28:22.753 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:22.753 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:22.753 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=acff56e39eff317dccbc2bc650d1b640cefaceda230f21a929d02873b39659d8 00:28:22.753 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:28:22.753 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:28:22.753 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.7XD 00:28:22.753 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.7XD 00:28:22.753 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.7XD 00:28:22.753 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:28:22.753 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 91465 00:28:22.753 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 91465 ']' 00:28:22.753 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:22.753 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:22.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:22.753 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:22.753 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:22.753 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.011 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:23.011 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:28:23.011 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:23.011 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.dGa 00:28:23.011 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.011 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.011 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.011 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.EZy ]] 00:28:23.011 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.EZy 00:28:23.011 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.011 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.011 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.011 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:23.011 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.4gD 00:28:23.011 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.011 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.011 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.011 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.aJO ]] 00:28:23.011 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.aJO 00:28:23.011 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.011 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.011 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.011 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:23.011 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.b9z 00:28:23.011 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.011 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.011 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.012 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.UET ]] 00:28:23.012 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.UET 00:28:23.012 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.012 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.012 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.012 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:23.012 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.SPi 00:28:23.012 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.012 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.012 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.012 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.dtj ]] 00:28:23.012 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.dtj 00:28:23.012 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.012 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.012 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.012 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:23.012 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.7XD 00:28:23.012 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.012 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.012 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.012 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:28:23.012 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:28:23.012 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:28:23.012 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:23.012 12:27:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:23.012 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:23.012 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:23.012 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:23.012 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:23.012 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:23.012 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:23.012 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:23.012 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:23.012 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:28:23.012 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:28:23.012 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:28:23.012 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:23.012 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:23.012 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:23.012 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:28:23.012 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:28:23.012 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:28:23.012 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:23.012 12:27:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:23.269 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:23.269 Waiting for block devices as requested 00:28:23.269 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:23.269 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:23.834 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:23.834 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:23.834 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:28:23.834 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:28:23.834 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:23.834 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:23.834 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:28:23.834 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:28:23.834 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:28:23.834 No valid GPT data, bailing 00:28:23.834 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:23.834 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:28:23.834 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:28:23.834 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:28:23.834 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:23.834 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:28:23.834 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:28:23.834 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:28:23.834 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:28:23.834 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:23.834 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:28:23.834 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:28:23.834 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:28:23.834 No valid GPT data, bailing 00:28:23.834 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:28:23.834 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:28:23.834 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:28:23.834 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:28:23.834 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:23.834 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:28:23.834 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:28:23.834 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:28:23.834 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:28:23.834 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:23.834 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:28:23.834 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:28:23.834 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:28:23.834 No valid GPT data, bailing 00:28:23.834 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:28:23.834 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:28:23.834 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:28:23.834 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:28:23.834 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:23.834 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:28:23.834 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:28:23.834 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:28:23.834 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:28:23.834 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:23.834 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:28:23.834 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:28:23.834 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:28:24.092 No valid GPT data, bailing 00:28:24.092 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:28:24.092 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:28:24.092 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:28:24.092 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:28:24.092 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:28:24.092 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:24.092 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:24.092 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:24.092 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:28:24.092 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:28:24.092 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:28:24.092 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:28:24.092 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:28:24.092 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:28:24.092 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:28:24.092 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:28:24.092 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:24.092 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid=5934b511-67a7-4c73-84f0-52fdf36bb79d -a 10.0.0.1 -t tcp -s 4420 00:28:24.092 00:28:24.092 Discovery Log Number of Records 2, Generation counter 2 00:28:24.092 =====Discovery Log Entry 0====== 00:28:24.092 trtype: tcp 00:28:24.092 adrfam: ipv4 00:28:24.092 subtype: current discovery subsystem 00:28:24.092 treq: not specified, sq flow control disable supported 00:28:24.092 portid: 1 00:28:24.092 trsvcid: 4420 00:28:24.092 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:24.092 traddr: 10.0.0.1 00:28:24.092 eflags: none 00:28:24.092 sectype: none 00:28:24.092 =====Discovery Log Entry 1====== 00:28:24.092 trtype: tcp 00:28:24.092 adrfam: ipv4 00:28:24.092 subtype: nvme subsystem 00:28:24.092 treq: not specified, sq flow control disable supported 00:28:24.092 portid: 1 00:28:24.092 trsvcid: 4420 00:28:24.092 subnqn: nqn.2024-02.io.spdk:cnode0 00:28:24.092 traddr: 10.0.0.1 00:28:24.092 eflags: none 00:28:24.092 sectype: none 00:28:24.092 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:24.092 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:28:24.092 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:24.092 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:24.092 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:24.092 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:24.092 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:24.092 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:24.092 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGFlMmIzZmMzODA1ZWNhN2RjM2NiZGY3NTU0YTY0M2RhYzA2MzhhNDc1ZTY1MDcyP0w/ng==: 00:28:24.092 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:YmVlMjg0M2JhNGIyZjUwN2E2ZjQxMmYyNjQxMzAzNTNlNjhlMTdkNjg5NjU0NDI5Rifouw==: 00:28:24.092 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:24.092 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:24.092 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGFlMmIzZmMzODA1ZWNhN2RjM2NiZGY3NTU0YTY0M2RhYzA2MzhhNDc1ZTY1MDcyP0w/ng==: 00:28:24.092 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmVlMjg0M2JhNGIyZjUwN2E2ZjQxMmYyNjQxMzAzNTNlNjhlMTdkNjg5NjU0NDI5Rifouw==: ]] 00:28:24.092 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmVlMjg0M2JhNGIyZjUwN2E2ZjQxMmYyNjQxMzAzNTNlNjhlMTdkNjg5NjU0NDI5Rifouw==: 00:28:24.092 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:24.092 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:28:24.092 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:24.092 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:24.092 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:28:24.092 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:24.092 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:28:24.092 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:24.092 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:24.093 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:24.093 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:24.093 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.093 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.093 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.093 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:24.093 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:24.093 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:24.093 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:24.093 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.093 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.093 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:24.093 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.093 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:24.093 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:28:24.093 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:24.093 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:24.093 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.093 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.351 nvme0n1 00:28:24.351 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.351 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.351 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.351 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.351 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:24.351 12:27:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.351 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.351 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.351 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.351 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.351 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.351 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:24.351 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:24.351 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:24.351 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y3NTYwN2YwNDJhZjIxZjNmMDllNDMyNGMyOTRhODlyenFt: 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDhjMTkyNzdiZDUzNDk5Zjc0ZGJiYWJjNzBhNjZmZTRlMzM0ZmZmYTNkZTU0OWQyMThmZWU1NWQxZmUyNTEzMRf8FiE=: 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y3NTYwN2YwNDJhZjIxZjNmMDllNDMyNGMyOTRhODlyenFt: 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDhjMTkyNzdiZDUzNDk5Zjc0ZGJiYWJjNzBhNjZmZTRlMzM0ZmZmYTNkZTU0OWQyMThmZWU1NWQxZmUyNTEzMRf8FiE=: ]] 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NDhjMTkyNzdiZDUzNDk5Zjc0ZGJiYWJjNzBhNjZmZTRlMzM0ZmZmYTNkZTU0OWQyMThmZWU1NWQxZmUyNTEzMRf8FiE=: 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.352 nvme0n1 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.352 
12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGFlMmIzZmMzODA1ZWNhN2RjM2NiZGY3NTU0YTY0M2RhYzA2MzhhNDc1ZTY1MDcyP0w/ng==: 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmVlMjg0M2JhNGIyZjUwN2E2ZjQxMmYyNjQxMzAzNTNlNjhlMTdkNjg5NjU0NDI5Rifouw==: 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGFlMmIzZmMzODA1ZWNhN2RjM2NiZGY3NTU0YTY0M2RhYzA2MzhhNDc1ZTY1MDcyP0w/ng==: 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmVlMjg0M2JhNGIyZjUwN2E2ZjQxMmYyNjQxMzAzNTNlNjhlMTdkNjg5NjU0NDI5Rifouw==: ]] 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmVlMjg0M2JhNGIyZjUwN2E2ZjQxMmYyNjQxMzAzNTNlNjhlMTdkNjg5NjU0NDI5Rifouw==: 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:24.352 12:27:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.352 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.611 nvme0n1 00:28:24.611 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.611 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.611 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.611 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.611 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:24.611 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.611 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.611 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.611 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.611 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.611 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.611 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:24.611 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:24.611 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:24.611 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:24.611 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:24.611 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:24.611 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA3NGYzMjVlY2Y1NjZmZGZkMjFjNDYyOWZjNWM2MmHeZ/3p: 00:28:24.611 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGFkNzY0NmEzZGNlNGQ4ZDE3ODUwZGI0ZjNhYWJmOGFuLfZN: 00:28:24.612 12:27:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA3NGYzMjVlY2Y1NjZmZGZkMjFjNDYyOWZjNWM2MmHeZ/3p: 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGFkNzY0NmEzZGNlNGQ4ZDE3ODUwZGI0ZjNhYWJmOGFuLfZN: ]] 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGFkNzY0NmEzZGNlNGQ4ZDE3ODUwZGI0ZjNhYWJmOGFuLfZN: 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.612 nvme0n1 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmI4MzRjOTQwZDVhMzcwMWVlZTYxYjFlNTExY2UxODgzNDQ3N2M2ZjY0ZDgxNmExl6cJRg==: 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDljMWQ4MWJiOTY0NmQ3ZTE5OGZmZDkzMTQ1MDFhNGMny4oJ: 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmI4MzRjOTQwZDVhMzcwMWVlZTYxYjFlNTExY2UxODgzNDQ3N2M2ZjY0ZDgxNmExl6cJRg==: 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDljMWQ4MWJiOTY0NmQ3ZTE5OGZmZDkzMTQ1MDFhNGMny4oJ: ]] 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDljMWQ4MWJiOTY0NmQ3ZTE5OGZmZDkzMTQ1MDFhNGMny4oJ: 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.612 12:27:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.612 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.870 nvme0n1 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:24.870 
12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWNmZjU2ZTM5ZWZmMzE3ZGNjYmMyYmM2NTBkMWI2NDBjZWZhY2VkYTIzMGYyMWE5MjlkMDI4NzNiMzk2NTlkOHDyzYw=: 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWNmZjU2ZTM5ZWZmMzE3ZGNjYmMyYmM2NTBkMWI2NDBjZWZhY2VkYTIzMGYyMWE5MjlkMDI4NzNiMzk2NTlkOHDyzYw=: 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
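
The records above close out the sha256/ffdhe2048 sweep with keyid=4, the one index that carries no controller key (ckey is empty, hence the [[ -z '' ]] test and an attach without --dhchap-ctrlr-key, i.e. unidirectional authentication). Just below, after the nvme0n1 name check and the detach, host/auth.sh@101 re-enters the outer loop with ffdhe3072 and repeats the identical pattern for keyids 0 through 4: program the DHHC-1 key on the target, pin the host to a single digest/dhgroup pair, attach a controller over TCP (the -a address comes from get_main_ns_ip resolving NVMF_INITIATOR_IP to 10.0.0.1), verify it by name, and tear it down. A minimal sketch of that loop, reconstructed only from the commands captured in this trace -- rpc_cmd is the suite's JSON-RPC wrapper and nvmet_auth_set_key its target-side helper, neither of whose bodies appears in this excerpt, so read it as illustrative rather than the script verbatim:

    for dhgroup in "${dhgroups[@]}"; do                # ffdhe2048, ffdhe3072, ffdhe4096, ...
        for keyid in "${!keys[@]}"; do                 # key indices 0..4
            # Target side: install the DHHC-1 key (and controller key, if defined).
            nvmet_auth_set_key sha256 "$dhgroup" "$keyid"
            # Host side: allow exactly one digest/dhgroup combination.
            rpc_cmd bdev_nvme_set_options \
                --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
            # Attach with the matching key; the controller key is appended only
            # when ckeys[keyid] is non-empty (bidirectional authentication).
            rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
                -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
                -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key$keyid" \
                ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
            # The controller only exists if the DH-HMAC-CHAP handshake succeeded.
            [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
            rpc_cmd bdev_nvme_detach_controller nvme0  # tear down before the next pair
        done
    done

The bare nvme0n1 tokens interleaved through the records appear to be output from the attached namespace rather than part of the xtrace itself; each one marks an iteration whose handshake completed and whose controller is about to be verified and detached.
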
00:28:24.870 nvme0n1 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y3NTYwN2YwNDJhZjIxZjNmMDllNDMyNGMyOTRhODlyenFt: 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDhjMTkyNzdiZDUzNDk5Zjc0ZGJiYWJjNzBhNjZmZTRlMzM0ZmZmYTNkZTU0OWQyMThmZWU1NWQxZmUyNTEzMRf8FiE=: 00:28:24.870 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:25.128 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:25.128 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y3NTYwN2YwNDJhZjIxZjNmMDllNDMyNGMyOTRhODlyenFt: 00:28:25.128 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDhjMTkyNzdiZDUzNDk5Zjc0ZGJiYWJjNzBhNjZmZTRlMzM0ZmZmYTNkZTU0OWQyMThmZWU1NWQxZmUyNTEzMRf8FiE=: ]] 00:28:25.128 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDhjMTkyNzdiZDUzNDk5Zjc0ZGJiYWJjNzBhNjZmZTRlMzM0ZmZmYTNkZTU0OWQyMThmZWU1NWQxZmUyNTEzMRf8FiE=: 00:28:25.128 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:28:25.128 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:25.128 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:25.128 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:25.128 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:25.128 12:27:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:25.128 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:25.128 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.128 12:27:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.386 nvme0n1 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:25.386 12:27:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGFlMmIzZmMzODA1ZWNhN2RjM2NiZGY3NTU0YTY0M2RhYzA2MzhhNDc1ZTY1MDcyP0w/ng==: 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmVlMjg0M2JhNGIyZjUwN2E2ZjQxMmYyNjQxMzAzNTNlNjhlMTdkNjg5NjU0NDI5Rifouw==: 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGFlMmIzZmMzODA1ZWNhN2RjM2NiZGY3NTU0YTY0M2RhYzA2MzhhNDc1ZTY1MDcyP0w/ng==: 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmVlMjg0M2JhNGIyZjUwN2E2ZjQxMmYyNjQxMzAzNTNlNjhlMTdkNjg5NjU0NDI5Rifouw==: ]] 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmVlMjg0M2JhNGIyZjUwN2E2ZjQxMmYyNjQxMzAzNTNlNjhlMTdkNjg5NjU0NDI5Rifouw==: 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.386 12:27:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.386 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.645 nvme0n1 00:28:25.645 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.645 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.645 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.645 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:25.645 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.645 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.645 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:25.645 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:25.645 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.645 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.645 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.645 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:25.645 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:28:25.645 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:25.645 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:25.645 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:25.645 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:25.645 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA3NGYzMjVlY2Y1NjZmZGZkMjFjNDYyOWZjNWM2MmHeZ/3p: 00:28:25.645 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGFkNzY0NmEzZGNlNGQ4ZDE3ODUwZGI0ZjNhYWJmOGFuLfZN: 00:28:25.645 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:25.645 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:25.645 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA3NGYzMjVlY2Y1NjZmZGZkMjFjNDYyOWZjNWM2MmHeZ/3p: 00:28:25.645 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGFkNzY0NmEzZGNlNGQ4ZDE3ODUwZGI0ZjNhYWJmOGFuLfZN: ]] 00:28:25.645 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGFkNzY0NmEzZGNlNGQ4ZDE3ODUwZGI0ZjNhYWJmOGFuLfZN: 00:28:25.645 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:28:25.645 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:25.645 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:25.645 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:25.645 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:25.645 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:25.645 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:25.645 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.645 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.645 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.645 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:25.645 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:25.645 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:25.645 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:25.645 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.645 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.645 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:25.645 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.645 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:25.645 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:25.645 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:25.645 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:25.646 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.646 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.646 nvme0n1 00:28:25.646 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.646 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:25.646 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.646 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.646 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.646 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.646 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:25.646 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:28:25.646 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.646 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.646 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.646 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:25.646 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:28:25.646 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:25.646 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:25.646 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:25.646 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:25.646 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmI4MzRjOTQwZDVhMzcwMWVlZTYxYjFlNTExY2UxODgzNDQ3N2M2ZjY0ZDgxNmExl6cJRg==: 00:28:25.646 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDljMWQ4MWJiOTY0NmQ3ZTE5OGZmZDkzMTQ1MDFhNGMny4oJ: 00:28:25.646 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:25.646 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:25.646 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmI4MzRjOTQwZDVhMzcwMWVlZTYxYjFlNTExY2UxODgzNDQ3N2M2ZjY0ZDgxNmExl6cJRg==: 00:28:25.646 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDljMWQ4MWJiOTY0NmQ3ZTE5OGZmZDkzMTQ1MDFhNGMny4oJ: ]] 00:28:25.646 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDljMWQ4MWJiOTY0NmQ3ZTE5OGZmZDkzMTQ1MDFhNGMny4oJ: 00:28:25.646 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:28:25.646 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:25.646 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:25.646 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:25.646 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:25.646 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:25.646 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:25.646 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.646 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.646 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.646 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:25.646 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:25.646 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:25.646 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:25.646 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.646 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.646 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:25.646 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.646 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:25.646 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:25.646 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:25.646 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:25.646 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.646 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.904 nvme0n1 00:28:25.905 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.905 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.905 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.905 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.905 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:25.905 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.905 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:25.905 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:25.905 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.905 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.905 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.905 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:25.905 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:28:25.905 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:25.905 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:25.905 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:25.905 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:25.905 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWNmZjU2ZTM5ZWZmMzE3ZGNjYmMyYmM2NTBkMWI2NDBjZWZhY2VkYTIzMGYyMWE5MjlkMDI4NzNiMzk2NTlkOHDyzYw=: 00:28:25.905 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:25.905 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:25.905 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:25.905 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YWNmZjU2ZTM5ZWZmMzE3ZGNjYmMyYmM2NTBkMWI2NDBjZWZhY2VkYTIzMGYyMWE5MjlkMDI4NzNiMzk2NTlkOHDyzYw=: 00:28:25.905 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:25.905 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:28:25.905 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:25.905 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:25.905 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:25.905 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:25.905 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:25.905 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:25.905 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.905 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.905 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.905 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:25.905 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:25.905 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:25.905 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:25.905 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.905 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.905 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:25.905 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.905 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:25.905 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:25.905 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:25.905 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:25.905 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.905 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.905 nvme0n1 00:28:25.905 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.905 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.905 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:25.905 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.905 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.905 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.162 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:26.162 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:26.162 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.162 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.162 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.162 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:26.162 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:26.162 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:28:26.163 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:26.163 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:26.163 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:26.163 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:26.163 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y3NTYwN2YwNDJhZjIxZjNmMDllNDMyNGMyOTRhODlyenFt: 00:28:26.163 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDhjMTkyNzdiZDUzNDk5Zjc0ZGJiYWJjNzBhNjZmZTRlMzM0ZmZmYTNkZTU0OWQyMThmZWU1NWQxZmUyNTEzMRf8FiE=: 00:28:26.163 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:26.163 12:27:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:26.727 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y3NTYwN2YwNDJhZjIxZjNmMDllNDMyNGMyOTRhODlyenFt: 00:28:26.727 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDhjMTkyNzdiZDUzNDk5Zjc0ZGJiYWJjNzBhNjZmZTRlMzM0ZmZmYTNkZTU0OWQyMThmZWU1NWQxZmUyNTEzMRf8FiE=: ]] 00:28:26.727 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDhjMTkyNzdiZDUzNDk5Zjc0ZGJiYWJjNzBhNjZmZTRlMzM0ZmZmYTNkZTU0OWQyMThmZWU1NWQxZmUyNTEzMRf8FiE=: 00:28:26.727 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:28:26.727 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:26.727 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:26.727 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:26.727 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:26.727 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:26.727 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:26.727 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.727 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.727 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.727 12:27:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:26.727 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:26.727 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:26.727 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:26.727 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:26.727 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:26.727 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:26.727 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:26.727 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:26.727 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:26.727 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:26.727 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:26.727 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.727 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.727 nvme0n1 00:28:26.727 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.727 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:26.727 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.727 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.727 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:26.727 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.727 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:26.727 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:26.727 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.727 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.727 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.727 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:26.727 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:28:26.727 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:26.727 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:26.727 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:26.727 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:26.727 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NGFlMmIzZmMzODA1ZWNhN2RjM2NiZGY3NTU0YTY0M2RhYzA2MzhhNDc1ZTY1MDcyP0w/ng==: 00:28:26.727 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmVlMjg0M2JhNGIyZjUwN2E2ZjQxMmYyNjQxMzAzNTNlNjhlMTdkNjg5NjU0NDI5Rifouw==: 00:28:26.727 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:26.727 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:26.727 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGFlMmIzZmMzODA1ZWNhN2RjM2NiZGY3NTU0YTY0M2RhYzA2MzhhNDc1ZTY1MDcyP0w/ng==: 00:28:26.727 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmVlMjg0M2JhNGIyZjUwN2E2ZjQxMmYyNjQxMzAzNTNlNjhlMTdkNjg5NjU0NDI5Rifouw==: ]] 00:28:26.727 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmVlMjg0M2JhNGIyZjUwN2E2ZjQxMmYyNjQxMzAzNTNlNjhlMTdkNjg5NjU0NDI5Rifouw==: 00:28:26.727 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:28:26.727 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:26.727 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:26.727 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:26.727 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:26.727 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:26.727 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:26.727 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.727 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.728 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.728 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:26.728 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:26.728 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:26.728 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:26.728 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:26.728 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:26.728 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:26.728 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:26.728 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:26.728 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:26.728 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:26.728 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:26.728 12:27:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.728 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.985 nvme0n1 00:28:26.985 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.985 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:26.985 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:26.985 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.985 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.985 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.985 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:26.985 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:26.985 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.985 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.985 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.985 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:26.985 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:28:26.985 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:26.985 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:26.985 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:26.985 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:26.985 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA3NGYzMjVlY2Y1NjZmZGZkMjFjNDYyOWZjNWM2MmHeZ/3p: 00:28:26.985 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGFkNzY0NmEzZGNlNGQ4ZDE3ODUwZGI0ZjNhYWJmOGFuLfZN: 00:28:26.985 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:26.985 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:26.985 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA3NGYzMjVlY2Y1NjZmZGZkMjFjNDYyOWZjNWM2MmHeZ/3p: 00:28:26.985 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGFkNzY0NmEzZGNlNGQ4ZDE3ODUwZGI0ZjNhYWJmOGFuLfZN: ]] 00:28:26.985 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGFkNzY0NmEzZGNlNGQ4ZDE3ODUwZGI0ZjNhYWJmOGFuLfZN: 00:28:26.985 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:28:26.985 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:26.985 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:26.985 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:26.985 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:26.985 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:26.985 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:26.985 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.985 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.985 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.985 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:26.985 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:26.985 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:26.985 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:26.985 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:26.985 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:26.985 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:26.985 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:26.985 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:26.985 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:26.985 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:26.985 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:26.985 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.985 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.243 nvme0n1 00:28:27.243 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.243 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:27.243 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:27.243 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.243 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.243 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.243 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:27.243 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:27.243 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.243 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.243 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.243 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:27.243 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:28:27.243 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:27.243 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:27.243 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:27.243 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:27.243 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmI4MzRjOTQwZDVhMzcwMWVlZTYxYjFlNTExY2UxODgzNDQ3N2M2ZjY0ZDgxNmExl6cJRg==: 00:28:27.243 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDljMWQ4MWJiOTY0NmQ3ZTE5OGZmZDkzMTQ1MDFhNGMny4oJ: 00:28:27.243 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:27.243 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:27.243 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmI4MzRjOTQwZDVhMzcwMWVlZTYxYjFlNTExY2UxODgzNDQ3N2M2ZjY0ZDgxNmExl6cJRg==: 00:28:27.243 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDljMWQ4MWJiOTY0NmQ3ZTE5OGZmZDkzMTQ1MDFhNGMny4oJ: ]] 00:28:27.243 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDljMWQ4MWJiOTY0NmQ3ZTE5OGZmZDkzMTQ1MDFhNGMny4oJ: 00:28:27.243 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:28:27.243 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:27.243 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:27.243 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:27.243 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:27.243 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:27.243 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:27.243 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.243 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.243 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.243 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:27.243 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:27.243 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:27.243 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:27.243 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:27.243 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:27.243 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:27.243 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:27.243 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:27.243 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:27.243 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:27.243 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:27.243 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.243 12:27:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.500 nvme0n1 00:28:27.500 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.500 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:27.500 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:27.500 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.500 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.500 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.500 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:27.500 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:27.500 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.500 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.500 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.500 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:27.500 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:28:27.500 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:27.500 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:27.500 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:27.500 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:27.500 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWNmZjU2ZTM5ZWZmMzE3ZGNjYmMyYmM2NTBkMWI2NDBjZWZhY2VkYTIzMGYyMWE5MjlkMDI4NzNiMzk2NTlkOHDyzYw=: 00:28:27.500 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:27.500 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:27.500 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:27.500 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWNmZjU2ZTM5ZWZmMzE3ZGNjYmMyYmM2NTBkMWI2NDBjZWZhY2VkYTIzMGYyMWE5MjlkMDI4NzNiMzk2NTlkOHDyzYw=: 00:28:27.500 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:27.500 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:28:27.500 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:27.500 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:27.500 12:27:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:27.500 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:27.501 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:27.501 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:27.501 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.501 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.501 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.501 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:27.501 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:27.501 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:27.501 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:27.501 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:27.501 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:27.501 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:27.501 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:27.501 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:27.501 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:27.501 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:27.501 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:27.501 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.501 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.501 nvme0n1 00:28:27.501 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.501 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:27.501 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:27.501 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.501 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.758 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.758 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:27.758 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:27.758 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:27.758 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.758 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:27.758 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:27.758 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:27.758 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:28:27.758 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:27.758 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:27.758 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:27.758 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:27.758 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y3NTYwN2YwNDJhZjIxZjNmMDllNDMyNGMyOTRhODlyenFt: 00:28:27.758 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDhjMTkyNzdiZDUzNDk5Zjc0ZGJiYWJjNzBhNjZmZTRlMzM0ZmZmYTNkZTU0OWQyMThmZWU1NWQxZmUyNTEzMRf8FiE=: 00:28:27.758 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:27.758 12:27:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:29.129 12:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y3NTYwN2YwNDJhZjIxZjNmMDllNDMyNGMyOTRhODlyenFt: 00:28:29.129 12:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDhjMTkyNzdiZDUzNDk5Zjc0ZGJiYWJjNzBhNjZmZTRlMzM0ZmZmYTNkZTU0OWQyMThmZWU1NWQxZmUyNTEzMRf8FiE=: ]] 00:28:29.129 12:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDhjMTkyNzdiZDUzNDk5Zjc0ZGJiYWJjNzBhNjZmZTRlMzM0ZmZmYTNkZTU0OWQyMThmZWU1NWQxZmUyNTEzMRf8FiE=: 00:28:29.129 12:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:28:29.129 12:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.129 12:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:29.129 12:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:29.129 12:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:29.129 12:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.129 12:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:29.129 12:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.129 12:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.129 12:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.129 12:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.129 12:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:29.129 12:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:29.129 12:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:29.129 12:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.129 12:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.129 12:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:29.129 12:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.129 12:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:29.129 12:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:29.129 12:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:29.129 12:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:29.129 12:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.129 12:27:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.695 nvme0n1 00:28:29.695 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.695 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.695 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.695 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.695 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.695 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.695 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.695 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.695 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.695 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.695 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.695 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.695 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:28:29.695 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.695 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:29.695 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:29.695 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:29.695 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGFlMmIzZmMzODA1ZWNhN2RjM2NiZGY3NTU0YTY0M2RhYzA2MzhhNDc1ZTY1MDcyP0w/ng==: 00:28:29.695 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmVlMjg0M2JhNGIyZjUwN2E2ZjQxMmYyNjQxMzAzNTNlNjhlMTdkNjg5NjU0NDI5Rifouw==: 00:28:29.695 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:29.695 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:29.696 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NGFlMmIzZmMzODA1ZWNhN2RjM2NiZGY3NTU0YTY0M2RhYzA2MzhhNDc1ZTY1MDcyP0w/ng==: 00:28:29.696 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmVlMjg0M2JhNGIyZjUwN2E2ZjQxMmYyNjQxMzAzNTNlNjhlMTdkNjg5NjU0NDI5Rifouw==: ]] 00:28:29.696 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmVlMjg0M2JhNGIyZjUwN2E2ZjQxMmYyNjQxMzAzNTNlNjhlMTdkNjg5NjU0NDI5Rifouw==: 00:28:29.696 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:28:29.696 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.696 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:29.696 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:29.696 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:29.696 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.696 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:29.696 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.696 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.696 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.696 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.696 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:29.696 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:29.696 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:29.696 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.696 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.696 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:29.696 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.696 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:29.696 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:29.696 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:29.696 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:29.696 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.696 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.955 nvme0n1 00:28:29.955 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.955 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.955 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.955 12:27:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.955 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.955 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.955 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.955 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.955 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.955 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.955 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.955 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.955 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:28:29.955 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.955 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:29.955 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:29.955 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:29.955 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA3NGYzMjVlY2Y1NjZmZGZkMjFjNDYyOWZjNWM2MmHeZ/3p: 00:28:29.955 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGFkNzY0NmEzZGNlNGQ4ZDE3ODUwZGI0ZjNhYWJmOGFuLfZN: 00:28:29.955 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:29.955 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:29.955 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA3NGYzMjVlY2Y1NjZmZGZkMjFjNDYyOWZjNWM2MmHeZ/3p: 00:28:29.955 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGFkNzY0NmEzZGNlNGQ4ZDE3ODUwZGI0ZjNhYWJmOGFuLfZN: ]] 00:28:29.955 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGFkNzY0NmEzZGNlNGQ4ZDE3ODUwZGI0ZjNhYWJmOGFuLfZN: 00:28:29.955 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:28:29.955 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.955 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:29.955 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:29.955 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:29.955 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.955 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:29.955 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.955 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.955 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.955 12:27:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.955 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:29.955 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:29.955 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:29.955 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.955 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.955 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:29.955 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.955 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:29.955 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:29.955 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:29.955 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:29.955 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.955 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.214 nvme0n1 00:28:30.214 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.214 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.214 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.214 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.214 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.214 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.214 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.214 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.214 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.214 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.214 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.214 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.214 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:28:30.214 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.214 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:30.214 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:30.214 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:30.214 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YmI4MzRjOTQwZDVhMzcwMWVlZTYxYjFlNTExY2UxODgzNDQ3N2M2ZjY0ZDgxNmExl6cJRg==: 00:28:30.214 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDljMWQ4MWJiOTY0NmQ3ZTE5OGZmZDkzMTQ1MDFhNGMny4oJ: 00:28:30.214 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:30.214 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:30.214 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmI4MzRjOTQwZDVhMzcwMWVlZTYxYjFlNTExY2UxODgzNDQ3N2M2ZjY0ZDgxNmExl6cJRg==: 00:28:30.214 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDljMWQ4MWJiOTY0NmQ3ZTE5OGZmZDkzMTQ1MDFhNGMny4oJ: ]] 00:28:30.214 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDljMWQ4MWJiOTY0NmQ3ZTE5OGZmZDkzMTQ1MDFhNGMny4oJ: 00:28:30.214 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:28:30.214 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.214 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:30.214 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:30.214 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:30.214 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.214 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:30.214 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.214 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.214 12:27:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.214 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.214 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:30.214 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:30.214 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:30.214 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.214 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.214 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:30.214 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.214 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:30.214 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:30.214 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:30.214 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:30.214 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.214 
12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.473 nvme0n1 00:28:30.473 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.473 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.473 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.473 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.473 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.473 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.473 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.473 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.473 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.473 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.473 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.473 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.473 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:28:30.473 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.473 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:30.473 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:30.473 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:30.473 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWNmZjU2ZTM5ZWZmMzE3ZGNjYmMyYmM2NTBkMWI2NDBjZWZhY2VkYTIzMGYyMWE5MjlkMDI4NzNiMzk2NTlkOHDyzYw=: 00:28:30.473 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:30.473 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:30.473 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:30.473 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWNmZjU2ZTM5ZWZmMzE3ZGNjYmMyYmM2NTBkMWI2NDBjZWZhY2VkYTIzMGYyMWE5MjlkMDI4NzNiMzk2NTlkOHDyzYw=: 00:28:30.473 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:30.473 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:28:30.473 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.473 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:30.473 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:30.473 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:30.473 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.473 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:30.473 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.473 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.473 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.473 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.473 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:30.473 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:30.473 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:30.473 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.473 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.473 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:30.473 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.473 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:30.473 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:30.473 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:30.473 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:30.473 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.473 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.039 nvme0n1 00:28:31.039 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.039 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:31.039 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.039 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.039 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:31.039 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.039 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.039 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:31.039 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.039 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.039 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.039 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:31.039 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:31.039 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:28:31.039 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:31.039 12:27:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:31.039 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:31.039 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:31.039 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y3NTYwN2YwNDJhZjIxZjNmMDllNDMyNGMyOTRhODlyenFt: 00:28:31.039 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDhjMTkyNzdiZDUzNDk5Zjc0ZGJiYWJjNzBhNjZmZTRlMzM0ZmZmYTNkZTU0OWQyMThmZWU1NWQxZmUyNTEzMRf8FiE=: 00:28:31.039 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:31.039 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:31.039 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y3NTYwN2YwNDJhZjIxZjNmMDllNDMyNGMyOTRhODlyenFt: 00:28:31.039 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDhjMTkyNzdiZDUzNDk5Zjc0ZGJiYWJjNzBhNjZmZTRlMzM0ZmZmYTNkZTU0OWQyMThmZWU1NWQxZmUyNTEzMRf8FiE=: ]] 00:28:31.039 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDhjMTkyNzdiZDUzNDk5Zjc0ZGJiYWJjNzBhNjZmZTRlMzM0ZmZmYTNkZTU0OWQyMThmZWU1NWQxZmUyNTEzMRf8FiE=: 00:28:31.039 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:28:31.039 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:31.039 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:31.039 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:31.039 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:31.039 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:31.039 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:31.039 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.039 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.039 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.039 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:31.039 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:31.039 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:31.039 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:31.039 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.039 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.039 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:31.039 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.039 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:31.039 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:31.039 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:31.039 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:31.039 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.039 12:27:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.297 nvme0n1 00:28:31.297 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.297 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:31.297 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.297 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.297 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:31.555 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.555 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.555 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:31.555 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.555 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.555 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.555 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:31.555 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:28:31.555 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:31.555 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:31.555 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:31.555 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:31.555 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGFlMmIzZmMzODA1ZWNhN2RjM2NiZGY3NTU0YTY0M2RhYzA2MzhhNDc1ZTY1MDcyP0w/ng==: 00:28:31.556 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmVlMjg0M2JhNGIyZjUwN2E2ZjQxMmYyNjQxMzAzNTNlNjhlMTdkNjg5NjU0NDI5Rifouw==: 00:28:31.556 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:31.556 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:31.556 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGFlMmIzZmMzODA1ZWNhN2RjM2NiZGY3NTU0YTY0M2RhYzA2MzhhNDc1ZTY1MDcyP0w/ng==: 00:28:31.556 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmVlMjg0M2JhNGIyZjUwN2E2ZjQxMmYyNjQxMzAzNTNlNjhlMTdkNjg5NjU0NDI5Rifouw==: ]] 00:28:31.556 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmVlMjg0M2JhNGIyZjUwN2E2ZjQxMmYyNjQxMzAzNTNlNjhlMTdkNjg5NjU0NDI5Rifouw==: 00:28:31.556 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:28:31.556 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:31.556 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:31.556 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:31.556 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:31.556 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:31.556 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:31.556 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.556 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.556 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.556 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:31.556 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:31.556 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:31.556 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:31.556 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.556 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.556 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:31.556 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.556 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:31.556 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:31.556 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:31.556 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:31.556 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.556 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.121 nvme0n1 00:28:32.121 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.121 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:32.121 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:32.121 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.121 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.121 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.121 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:32.121 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:32.121 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:28:32.121 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.121 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.121 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:32.121 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:28:32.121 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:32.121 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:32.121 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:32.121 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:32.121 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA3NGYzMjVlY2Y1NjZmZGZkMjFjNDYyOWZjNWM2MmHeZ/3p: 00:28:32.121 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGFkNzY0NmEzZGNlNGQ4ZDE3ODUwZGI0ZjNhYWJmOGFuLfZN: 00:28:32.121 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:32.121 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:32.121 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA3NGYzMjVlY2Y1NjZmZGZkMjFjNDYyOWZjNWM2MmHeZ/3p: 00:28:32.121 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGFkNzY0NmEzZGNlNGQ4ZDE3ODUwZGI0ZjNhYWJmOGFuLfZN: ]] 00:28:32.121 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGFkNzY0NmEzZGNlNGQ4ZDE3ODUwZGI0ZjNhYWJmOGFuLfZN: 00:28:32.121 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:28:32.121 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:32.121 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:32.121 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:32.121 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:32.121 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:32.121 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:32.121 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.121 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.121 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.121 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:32.121 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:32.121 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:32.121 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:32.121 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:32.122 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:32.122 
12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:32.122 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:32.122 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:32.122 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:32.122 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:32.122 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:32.122 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.122 12:27:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.410 nvme0n1 00:28:32.410 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.410 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:32.410 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:32.410 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.410 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.410 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.695 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:32.695 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:32.695 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.695 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.695 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.695 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:32.695 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:28:32.695 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:32.695 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:32.695 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:32.695 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:32.695 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmI4MzRjOTQwZDVhMzcwMWVlZTYxYjFlNTExY2UxODgzNDQ3N2M2ZjY0ZDgxNmExl6cJRg==: 00:28:32.695 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDljMWQ4MWJiOTY0NmQ3ZTE5OGZmZDkzMTQ1MDFhNGMny4oJ: 00:28:32.695 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:32.695 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:32.695 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmI4MzRjOTQwZDVhMzcwMWVlZTYxYjFlNTExY2UxODgzNDQ3N2M2ZjY0ZDgxNmExl6cJRg==: 00:28:32.695 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:ZDljMWQ4MWJiOTY0NmQ3ZTE5OGZmZDkzMTQ1MDFhNGMny4oJ: ]] 00:28:32.695 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDljMWQ4MWJiOTY0NmQ3ZTE5OGZmZDkzMTQ1MDFhNGMny4oJ: 00:28:32.695 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:28:32.695 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:32.695 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:32.695 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:32.695 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:32.695 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:32.695 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:32.695 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.695 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.695 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.695 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:32.695 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:32.695 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:32.695 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:32.695 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:32.695 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:32.695 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:32.695 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:32.695 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:32.695 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:32.695 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:32.695 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:32.695 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.695 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.962 nvme0n1 00:28:32.963 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.963 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:32.963 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:32.963 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.963 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.221 12:27:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.221 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:33.221 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:33.221 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.221 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.221 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.221 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:33.221 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:28:33.221 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:33.221 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:33.221 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:33.221 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:33.221 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWNmZjU2ZTM5ZWZmMzE3ZGNjYmMyYmM2NTBkMWI2NDBjZWZhY2VkYTIzMGYyMWE5MjlkMDI4NzNiMzk2NTlkOHDyzYw=: 00:28:33.221 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:33.221 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:33.221 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:33.221 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWNmZjU2ZTM5ZWZmMzE3ZGNjYmMyYmM2NTBkMWI2NDBjZWZhY2VkYTIzMGYyMWE5MjlkMDI4NzNiMzk2NTlkOHDyzYw=: 00:28:33.221 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:33.221 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:28:33.221 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:33.221 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:33.221 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:33.221 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:33.221 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:33.221 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:33.221 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.221 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.221 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.221 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:33.221 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:33.221 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:33.221 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:33.221 12:27:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:33.221 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:33.221 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:33.221 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:33.221 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:33.221 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:33.221 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:33.221 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:33.221 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.221 12:27:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.785 nvme0n1 00:28:33.785 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.785 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:33.785 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.785 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:33.785 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.785 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.785 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:33.785 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:33.785 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.785 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.785 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.785 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:33.785 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:33.785 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:33.785 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:28:33.785 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:33.785 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:33.785 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:33.785 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:33.785 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y3NTYwN2YwNDJhZjIxZjNmMDllNDMyNGMyOTRhODlyenFt: 00:28:33.785 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NDhjMTkyNzdiZDUzNDk5Zjc0ZGJiYWJjNzBhNjZmZTRlMzM0ZmZmYTNkZTU0OWQyMThmZWU1NWQxZmUyNTEzMRf8FiE=: 00:28:33.785 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:33.785 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:33.785 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y3NTYwN2YwNDJhZjIxZjNmMDllNDMyNGMyOTRhODlyenFt: 00:28:33.785 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDhjMTkyNzdiZDUzNDk5Zjc0ZGJiYWJjNzBhNjZmZTRlMzM0ZmZmYTNkZTU0OWQyMThmZWU1NWQxZmUyNTEzMRf8FiE=: ]] 00:28:33.785 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDhjMTkyNzdiZDUzNDk5Zjc0ZGJiYWJjNzBhNjZmZTRlMzM0ZmZmYTNkZTU0OWQyMThmZWU1NWQxZmUyNTEzMRf8FiE=: 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:33.786 nvme0n1 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGFlMmIzZmMzODA1ZWNhN2RjM2NiZGY3NTU0YTY0M2RhYzA2MzhhNDc1ZTY1MDcyP0w/ng==: 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmVlMjg0M2JhNGIyZjUwN2E2ZjQxMmYyNjQxMzAzNTNlNjhlMTdkNjg5NjU0NDI5Rifouw==: 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGFlMmIzZmMzODA1ZWNhN2RjM2NiZGY3NTU0YTY0M2RhYzA2MzhhNDc1ZTY1MDcyP0w/ng==: 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmVlMjg0M2JhNGIyZjUwN2E2ZjQxMmYyNjQxMzAzNTNlNjhlMTdkNjg5NjU0NDI5Rifouw==: ]] 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmVlMjg0M2JhNGIyZjUwN2E2ZjQxMmYyNjQxMzAzNTNlNjhlMTdkNjg5NjU0NDI5Rifouw==: 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.786 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.044 nvme0n1 00:28:34.044 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.044 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.044 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.044 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.044 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.044 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.044 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.044 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.044 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.044 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.044 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.044 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:34.044 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:28:34.044 
12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.044 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:34.044 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:34.044 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:34.044 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA3NGYzMjVlY2Y1NjZmZGZkMjFjNDYyOWZjNWM2MmHeZ/3p: 00:28:34.044 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGFkNzY0NmEzZGNlNGQ4ZDE3ODUwZGI0ZjNhYWJmOGFuLfZN: 00:28:34.044 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:34.044 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:34.044 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA3NGYzMjVlY2Y1NjZmZGZkMjFjNDYyOWZjNWM2MmHeZ/3p: 00:28:34.044 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGFkNzY0NmEzZGNlNGQ4ZDE3ODUwZGI0ZjNhYWJmOGFuLfZN: ]] 00:28:34.044 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGFkNzY0NmEzZGNlNGQ4ZDE3ODUwZGI0ZjNhYWJmOGFuLfZN: 00:28:34.044 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:28:34.044 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.044 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:34.044 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:34.044 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:34.044 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.044 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:34.044 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.044 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.044 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.044 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.044 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:34.044 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:34.044 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:34.044 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.044 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.044 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:34.044 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.044 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:34.044 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:34.044 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:34.044 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:34.044 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.044 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.044 nvme0n1 00:28:34.044 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.044 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.044 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.044 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.045 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.045 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.045 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.045 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.045 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.045 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.045 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.045 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:34.045 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:28:34.045 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.045 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:34.045 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:34.045 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:34.045 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmI4MzRjOTQwZDVhMzcwMWVlZTYxYjFlNTExY2UxODgzNDQ3N2M2ZjY0ZDgxNmExl6cJRg==: 00:28:34.045 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDljMWQ4MWJiOTY0NmQ3ZTE5OGZmZDkzMTQ1MDFhNGMny4oJ: 00:28:34.045 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:34.045 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:34.045 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmI4MzRjOTQwZDVhMzcwMWVlZTYxYjFlNTExY2UxODgzNDQ3N2M2ZjY0ZDgxNmExl6cJRg==: 00:28:34.045 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDljMWQ4MWJiOTY0NmQ3ZTE5OGZmZDkzMTQ1MDFhNGMny4oJ: ]] 00:28:34.045 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDljMWQ4MWJiOTY0NmQ3ZTE5OGZmZDkzMTQ1MDFhNGMny4oJ: 00:28:34.045 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:28:34.045 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.045 
12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:34.045 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:34.045 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:34.045 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.045 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:34.045 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.045 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.045 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.045 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.045 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:34.045 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:34.045 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:34.045 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.045 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.045 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:34.045 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.045 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:34.045 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:34.045 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:34.045 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:34.045 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.045 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.304 nvme0n1 00:28:34.304 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.304 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.304 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.304 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.304 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.304 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.304 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.304 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.304 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.304 12:27:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWNmZjU2ZTM5ZWZmMzE3ZGNjYmMyYmM2NTBkMWI2NDBjZWZhY2VkYTIzMGYyMWE5MjlkMDI4NzNiMzk2NTlkOHDyzYw=: 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWNmZjU2ZTM5ZWZmMzE3ZGNjYmMyYmM2NTBkMWI2NDBjZWZhY2VkYTIzMGYyMWE5MjlkMDI4NzNiMzk2NTlkOHDyzYw=: 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.304 nvme0n1 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y3NTYwN2YwNDJhZjIxZjNmMDllNDMyNGMyOTRhODlyenFt: 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDhjMTkyNzdiZDUzNDk5Zjc0ZGJiYWJjNzBhNjZmZTRlMzM0ZmZmYTNkZTU0OWQyMThmZWU1NWQxZmUyNTEzMRf8FiE=: 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y3NTYwN2YwNDJhZjIxZjNmMDllNDMyNGMyOTRhODlyenFt: 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDhjMTkyNzdiZDUzNDk5Zjc0ZGJiYWJjNzBhNjZmZTRlMzM0ZmZmYTNkZTU0OWQyMThmZWU1NWQxZmUyNTEzMRf8FiE=: ]] 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NDhjMTkyNzdiZDUzNDk5Zjc0ZGJiYWJjNzBhNjZmZTRlMzM0ZmZmYTNkZTU0OWQyMThmZWU1NWQxZmUyNTEzMRf8FiE=: 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.304 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.562 nvme0n1 00:28:34.562 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.562 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.562 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.562 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.562 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.562 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.562 
12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.562 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.562 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.562 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.562 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.562 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:34.562 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:28:34.562 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.563 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:34.563 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:34.563 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:34.563 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGFlMmIzZmMzODA1ZWNhN2RjM2NiZGY3NTU0YTY0M2RhYzA2MzhhNDc1ZTY1MDcyP0w/ng==: 00:28:34.563 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmVlMjg0M2JhNGIyZjUwN2E2ZjQxMmYyNjQxMzAzNTNlNjhlMTdkNjg5NjU0NDI5Rifouw==: 00:28:34.563 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:34.563 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:34.563 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGFlMmIzZmMzODA1ZWNhN2RjM2NiZGY3NTU0YTY0M2RhYzA2MzhhNDc1ZTY1MDcyP0w/ng==: 00:28:34.563 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmVlMjg0M2JhNGIyZjUwN2E2ZjQxMmYyNjQxMzAzNTNlNjhlMTdkNjg5NjU0NDI5Rifouw==: ]] 00:28:34.563 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmVlMjg0M2JhNGIyZjUwN2E2ZjQxMmYyNjQxMzAzNTNlNjhlMTdkNjg5NjU0NDI5Rifouw==: 00:28:34.563 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:28:34.563 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.563 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:34.563 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:34.563 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:34.563 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.563 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:34.563 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.563 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.563 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.563 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.563 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:34.563 12:27:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:34.563 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:34.563 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.563 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.563 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:34.563 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.563 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:34.563 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:34.563 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:34.563 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:34.563 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.563 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.822 nvme0n1 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA3NGYzMjVlY2Y1NjZmZGZkMjFjNDYyOWZjNWM2MmHeZ/3p: 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGFkNzY0NmEzZGNlNGQ4ZDE3ODUwZGI0ZjNhYWJmOGFuLfZN: 00:28:34.822 12:27:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA3NGYzMjVlY2Y1NjZmZGZkMjFjNDYyOWZjNWM2MmHeZ/3p: 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGFkNzY0NmEzZGNlNGQ4ZDE3ODUwZGI0ZjNhYWJmOGFuLfZN: ]] 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGFkNzY0NmEzZGNlNGQ4ZDE3ODUwZGI0ZjNhYWJmOGFuLfZN: 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.822 nvme0n1 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmI4MzRjOTQwZDVhMzcwMWVlZTYxYjFlNTExY2UxODgzNDQ3N2M2ZjY0ZDgxNmExl6cJRg==: 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDljMWQ4MWJiOTY0NmQ3ZTE5OGZmZDkzMTQ1MDFhNGMny4oJ: 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmI4MzRjOTQwZDVhMzcwMWVlZTYxYjFlNTExY2UxODgzNDQ3N2M2ZjY0ZDgxNmExl6cJRg==: 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDljMWQ4MWJiOTY0NmQ3ZTE5OGZmZDkzMTQ1MDFhNGMny4oJ: ]] 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDljMWQ4MWJiOTY0NmQ3ZTE5OGZmZDkzMTQ1MDFhNGMny4oJ: 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:34.822 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:34.823 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.823 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:34.823 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.823 12:27:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.823 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.823 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.823 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:34.823 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:34.823 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:34.823 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.823 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.823 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:34.823 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.823 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:34.823 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:34.823 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:34.823 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:34.823 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.823 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.081 nvme0n1 00:28:35.081 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.081 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.081 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.081 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.081 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.081 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.081 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.081 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.081 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.081 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.081 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.081 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.081 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:28:35.081 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.081 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:35.081 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:35.081 
12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:35.081 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWNmZjU2ZTM5ZWZmMzE3ZGNjYmMyYmM2NTBkMWI2NDBjZWZhY2VkYTIzMGYyMWE5MjlkMDI4NzNiMzk2NTlkOHDyzYw=: 00:28:35.081 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:35.081 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:35.081 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:35.081 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWNmZjU2ZTM5ZWZmMzE3ZGNjYmMyYmM2NTBkMWI2NDBjZWZhY2VkYTIzMGYyMWE5MjlkMDI4NzNiMzk2NTlkOHDyzYw=: 00:28:35.081 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:35.081 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:28:35.081 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:35.081 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:35.081 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:35.081 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:35.081 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:35.081 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:35.081 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.081 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.081 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.081 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:35.082 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:35.082 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:35.082 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:35.082 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.082 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.082 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:35.082 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.082 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:35.082 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:35.082 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:35.082 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:35.082 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.082 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
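The xtrace output above repeats one fixed pattern per (digest, dhgroup, keyid) combination: host/auth.sh sets the target-side DHHC-1 secret with nvmet_auth_set_key, restricts the host to the same digest and DH group via bdev_nvme_set_options, attaches a controller with the matching --dhchap-key (plus --dhchap-ctrlr-key whenever a controller key ckeyN exists, making the authentication bidirectional), checks that the controller actually came up with bdev_nvme_get_controllers, and detaches it again. A condensed paraphrase of that loop, reconstructed from the traced lines rather than copied from the script -- the digests/dhgroups arrays hold only the values visible in this section, and keys/ckeys are assumed to contain the DHHC-1 strings echoed in the trace:

    # Sketch of the loop traced at host/auth.sh@100-104 (reconstruction,
    # not the verbatim script; values limited to what this log shows).
    digests=(sha256 sha384)
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe8192)

    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # Pass the controller key only when a ckey exists for this keyid,
        # exactly as the ${ckeys[keyid]:+...} expansion in the trace does.
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" \
            --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 -q nqn.2024-02.io.spdk:host0 \
            -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" "${ckey[@]}"
        # Authentication succeeded iff the controller shows up by name.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done

The DHHC-1:NN:...: strings being echoed are NVMe DH-HMAC-CHAP secrets in their standard textual form; the two-digit field after DHHC-1 encodes how the base64 payload was transformed (00 = unhashed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512), which is why different keyids in this run carry different prefixes.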
00:28:35.082 nvme0n1 00:28:35.082 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.082 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.082 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.082 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.082 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.340 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.340 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.340 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.340 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.340 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.340 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.340 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:35.340 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.340 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:28:35.340 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.340 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:35.340 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:35.340 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:35.340 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y3NTYwN2YwNDJhZjIxZjNmMDllNDMyNGMyOTRhODlyenFt: 00:28:35.340 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDhjMTkyNzdiZDUzNDk5Zjc0ZGJiYWJjNzBhNjZmZTRlMzM0ZmZmYTNkZTU0OWQyMThmZWU1NWQxZmUyNTEzMRf8FiE=: 00:28:35.340 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:35.340 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:35.340 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y3NTYwN2YwNDJhZjIxZjNmMDllNDMyNGMyOTRhODlyenFt: 00:28:35.340 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDhjMTkyNzdiZDUzNDk5Zjc0ZGJiYWJjNzBhNjZmZTRlMzM0ZmZmYTNkZTU0OWQyMThmZWU1NWQxZmUyNTEzMRf8FiE=: ]] 00:28:35.340 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDhjMTkyNzdiZDUzNDk5Zjc0ZGJiYWJjNzBhNjZmZTRlMzM0ZmZmYTNkZTU0OWQyMThmZWU1NWQxZmUyNTEzMRf8FiE=: 00:28:35.340 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:28:35.340 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:35.340 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:35.340 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:35.340 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:35.340 12:27:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:35.340 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:35.340 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.340 12:27:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.340 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.340 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:35.340 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:35.340 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:35.340 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:35.340 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.340 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.340 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:35.340 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.340 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:35.340 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:35.340 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:35.340 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:35.340 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.340 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.340 nvme0n1 00:28:35.340 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.340 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.340 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.340 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.340 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.340 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.340 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.340 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.340 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.340 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.598 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.598 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.598 12:27:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:28:35.598 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.598 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:35.598 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:35.598 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:35.598 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGFlMmIzZmMzODA1ZWNhN2RjM2NiZGY3NTU0YTY0M2RhYzA2MzhhNDc1ZTY1MDcyP0w/ng==: 00:28:35.598 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmVlMjg0M2JhNGIyZjUwN2E2ZjQxMmYyNjQxMzAzNTNlNjhlMTdkNjg5NjU0NDI5Rifouw==: 00:28:35.598 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:35.598 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:35.598 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGFlMmIzZmMzODA1ZWNhN2RjM2NiZGY3NTU0YTY0M2RhYzA2MzhhNDc1ZTY1MDcyP0w/ng==: 00:28:35.598 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmVlMjg0M2JhNGIyZjUwN2E2ZjQxMmYyNjQxMzAzNTNlNjhlMTdkNjg5NjU0NDI5Rifouw==: ]] 00:28:35.598 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmVlMjg0M2JhNGIyZjUwN2E2ZjQxMmYyNjQxMzAzNTNlNjhlMTdkNjg5NjU0NDI5Rifouw==: 00:28:35.598 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:28:35.598 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:35.598 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:35.598 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:35.598 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:35.598 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:35.598 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:35.598 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.598 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.598 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.598 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:35.598 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:35.598 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:35.598 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:35.599 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.599 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.599 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:35.599 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.599 12:27:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:35.599 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:35.599 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:35.599 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:35.599 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.599 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.599 nvme0n1 00:28:35.599 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.599 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.599 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.599 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.599 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.599 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.599 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.599 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.599 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.599 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.599 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.599 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.599 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:28:35.599 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.599 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:35.599 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:35.599 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:35.599 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA3NGYzMjVlY2Y1NjZmZGZkMjFjNDYyOWZjNWM2MmHeZ/3p: 00:28:35.599 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGFkNzY0NmEzZGNlNGQ4ZDE3ODUwZGI0ZjNhYWJmOGFuLfZN: 00:28:35.599 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:35.599 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:35.599 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA3NGYzMjVlY2Y1NjZmZGZkMjFjNDYyOWZjNWM2MmHeZ/3p: 00:28:35.599 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGFkNzY0NmEzZGNlNGQ4ZDE3ODUwZGI0ZjNhYWJmOGFuLfZN: ]] 00:28:35.599 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGFkNzY0NmEzZGNlNGQ4ZDE3ODUwZGI0ZjNhYWJmOGFuLfZN: 00:28:35.599 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:28:35.599 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:35.599 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:35.599 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:35.599 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:35.599 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:35.599 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:35.599 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.599 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.599 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.599 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:35.599 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:35.599 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:35.599 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:35.599 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.599 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.599 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:35.599 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.599 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:35.599 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:35.599 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:35.599 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:35.599 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.599 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.857 nvme0n1 00:28:35.857 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.857 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.857 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.857 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.857 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.857 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.857 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.857 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:28:35.857 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.857 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.857 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.857 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.857 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:28:35.857 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.857 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:35.857 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:35.857 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:35.857 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmI4MzRjOTQwZDVhMzcwMWVlZTYxYjFlNTExY2UxODgzNDQ3N2M2ZjY0ZDgxNmExl6cJRg==: 00:28:35.857 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDljMWQ4MWJiOTY0NmQ3ZTE5OGZmZDkzMTQ1MDFhNGMny4oJ: 00:28:35.857 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:35.857 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:35.857 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmI4MzRjOTQwZDVhMzcwMWVlZTYxYjFlNTExY2UxODgzNDQ3N2M2ZjY0ZDgxNmExl6cJRg==: 00:28:35.857 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDljMWQ4MWJiOTY0NmQ3ZTE5OGZmZDkzMTQ1MDFhNGMny4oJ: ]] 00:28:35.857 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDljMWQ4MWJiOTY0NmQ3ZTE5OGZmZDkzMTQ1MDFhNGMny4oJ: 00:28:35.857 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:28:35.857 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:35.857 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:35.857 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:35.857 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:35.857 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:35.857 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:35.857 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.857 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.857 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.857 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:35.857 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:35.857 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:35.857 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:35.857 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.857 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.857 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:35.857 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.857 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:35.857 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:35.857 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:35.857 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:35.857 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.857 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.116 nvme0n1 00:28:36.116 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.116 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:36.116 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.116 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:36.116 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.116 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.116 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:36.116 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:36.116 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.116 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.116 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.116 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:36.116 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:28:36.116 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:36.116 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:36.116 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:36.116 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:36.116 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWNmZjU2ZTM5ZWZmMzE3ZGNjYmMyYmM2NTBkMWI2NDBjZWZhY2VkYTIzMGYyMWE5MjlkMDI4NzNiMzk2NTlkOHDyzYw=: 00:28:36.116 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:36.116 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:36.116 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:36.116 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YWNmZjU2ZTM5ZWZmMzE3ZGNjYmMyYmM2NTBkMWI2NDBjZWZhY2VkYTIzMGYyMWE5MjlkMDI4NzNiMzk2NTlkOHDyzYw=: 00:28:36.116 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:36.116 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:28:36.116 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:36.116 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:36.116 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:36.116 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:36.116 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:36.116 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:36.116 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.116 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.116 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.116 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:36.116 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:36.116 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:36.116 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:36.116 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:36.116 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:36.116 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:36.116 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:36.116 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:36.116 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:36.116 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:36.116 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:36.116 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.116 12:27:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.374 nvme0n1 00:28:36.374 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.374 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:36.374 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:36.374 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.374 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.374 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.374 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:36.374 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:36.374 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.374 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.374 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.374 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:36.374 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:36.374 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:28:36.374 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:36.374 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:36.374 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:36.374 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:36.374 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y3NTYwN2YwNDJhZjIxZjNmMDllNDMyNGMyOTRhODlyenFt: 00:28:36.374 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDhjMTkyNzdiZDUzNDk5Zjc0ZGJiYWJjNzBhNjZmZTRlMzM0ZmZmYTNkZTU0OWQyMThmZWU1NWQxZmUyNTEzMRf8FiE=: 00:28:36.374 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:36.374 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:36.374 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y3NTYwN2YwNDJhZjIxZjNmMDllNDMyNGMyOTRhODlyenFt: 00:28:36.374 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDhjMTkyNzdiZDUzNDk5Zjc0ZGJiYWJjNzBhNjZmZTRlMzM0ZmZmYTNkZTU0OWQyMThmZWU1NWQxZmUyNTEzMRf8FiE=: ]] 00:28:36.374 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDhjMTkyNzdiZDUzNDk5Zjc0ZGJiYWJjNzBhNjZmZTRlMzM0ZmZmYTNkZTU0OWQyMThmZWU1NWQxZmUyNTEzMRf8FiE=: 00:28:36.374 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:28:36.374 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:36.374 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:36.374 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:36.374 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:36.374 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:36.374 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:36.374 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.374 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.374 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.374 12:27:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:36.374 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:36.374 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:36.374 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:36.374 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:36.374 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:36.374 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:36.374 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:36.374 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:36.374 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:36.374 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:36.374 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:36.374 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.374 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.631 nvme0n1 00:28:36.631 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.631 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:36.631 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:36.631 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.631 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.888 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.888 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:36.888 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:36.888 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.888 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.888 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.888 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:36.888 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:28:36.888 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:36.888 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:36.888 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:36.888 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:36.888 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NGFlMmIzZmMzODA1ZWNhN2RjM2NiZGY3NTU0YTY0M2RhYzA2MzhhNDc1ZTY1MDcyP0w/ng==: 00:28:36.888 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmVlMjg0M2JhNGIyZjUwN2E2ZjQxMmYyNjQxMzAzNTNlNjhlMTdkNjg5NjU0NDI5Rifouw==: 00:28:36.888 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:36.888 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:36.888 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGFlMmIzZmMzODA1ZWNhN2RjM2NiZGY3NTU0YTY0M2RhYzA2MzhhNDc1ZTY1MDcyP0w/ng==: 00:28:36.889 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmVlMjg0M2JhNGIyZjUwN2E2ZjQxMmYyNjQxMzAzNTNlNjhlMTdkNjg5NjU0NDI5Rifouw==: ]] 00:28:36.889 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmVlMjg0M2JhNGIyZjUwN2E2ZjQxMmYyNjQxMzAzNTNlNjhlMTdkNjg5NjU0NDI5Rifouw==: 00:28:36.889 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:28:36.889 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:36.889 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:36.889 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:36.889 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:36.889 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:36.889 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:36.889 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.889 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.889 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.889 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:36.889 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:36.889 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:36.889 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:36.889 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:36.889 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:36.889 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:36.889 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:36.889 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:36.889 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:36.889 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:36.889 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:36.889 12:27:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.889 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.147 nvme0n1 00:28:37.147 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.147 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.147 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:37.147 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.147 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.147 12:27:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.147 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.147 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:37.147 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.147 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.404 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.404 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:37.404 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:28:37.404 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.404 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:37.405 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:37.405 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:37.405 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA3NGYzMjVlY2Y1NjZmZGZkMjFjNDYyOWZjNWM2MmHeZ/3p: 00:28:37.405 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGFkNzY0NmEzZGNlNGQ4ZDE3ODUwZGI0ZjNhYWJmOGFuLfZN: 00:28:37.405 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:37.405 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:37.405 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA3NGYzMjVlY2Y1NjZmZGZkMjFjNDYyOWZjNWM2MmHeZ/3p: 00:28:37.405 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGFkNzY0NmEzZGNlNGQ4ZDE3ODUwZGI0ZjNhYWJmOGFuLfZN: ]] 00:28:37.405 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGFkNzY0NmEzZGNlNGQ4ZDE3ODUwZGI0ZjNhYWJmOGFuLfZN: 00:28:37.405 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:28:37.405 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:37.405 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:37.405 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:37.405 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:37.405 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:37.405 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:37.405 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.405 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.405 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.405 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:37.405 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:37.405 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:37.405 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:37.405 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.405 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.405 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:37.405 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.405 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:37.405 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:37.405 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:37.405 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:37.405 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.405 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.663 nvme0n1 00:28:37.663 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.663 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.663 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:37.663 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.663 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.663 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.663 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.663 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:37.663 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.663 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.663 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.663 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:37.663 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:28:37.663 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.663 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:37.663 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:37.663 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:37.663 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmI4MzRjOTQwZDVhMzcwMWVlZTYxYjFlNTExY2UxODgzNDQ3N2M2ZjY0ZDgxNmExl6cJRg==: 00:28:37.663 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDljMWQ4MWJiOTY0NmQ3ZTE5OGZmZDkzMTQ1MDFhNGMny4oJ: 00:28:37.663 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:37.663 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:37.663 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmI4MzRjOTQwZDVhMzcwMWVlZTYxYjFlNTExY2UxODgzNDQ3N2M2ZjY0ZDgxNmExl6cJRg==: 00:28:37.663 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDljMWQ4MWJiOTY0NmQ3ZTE5OGZmZDkzMTQ1MDFhNGMny4oJ: ]] 00:28:37.663 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDljMWQ4MWJiOTY0NmQ3ZTE5OGZmZDkzMTQ1MDFhNGMny4oJ: 00:28:37.663 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:28:37.663 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:37.663 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:37.663 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:37.663 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:37.663 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:37.663 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:37.663 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.663 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.663 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.663 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:37.663 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:37.663 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:37.663 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:37.663 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.663 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.663 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:37.663 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.663 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:37.663 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:37.663 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:37.663 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:37.663 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.663 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.229 nvme0n1 00:28:38.229 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.229 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:38.229 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.229 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:38.229 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.229 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.229 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:38.229 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:38.229 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.229 12:27:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.229 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.229 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:38.229 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:28:38.229 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:38.229 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:38.229 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:38.229 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:38.229 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWNmZjU2ZTM5ZWZmMzE3ZGNjYmMyYmM2NTBkMWI2NDBjZWZhY2VkYTIzMGYyMWE5MjlkMDI4NzNiMzk2NTlkOHDyzYw=: 00:28:38.229 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:38.229 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:38.229 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:38.229 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWNmZjU2ZTM5ZWZmMzE3ZGNjYmMyYmM2NTBkMWI2NDBjZWZhY2VkYTIzMGYyMWE5MjlkMDI4NzNiMzk2NTlkOHDyzYw=: 00:28:38.229 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:38.229 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:28:38.229 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:38.229 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:38.229 12:27:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:38.229 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:38.229 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:38.229 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:38.229 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.229 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.229 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.229 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:38.229 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:38.229 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:38.229 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:38.229 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:38.229 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:38.229 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:38.229 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:38.229 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:38.229 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:38.229 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:38.230 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:38.230 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.230 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.796 nvme0n1 00:28:38.796 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.796 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:38.796 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:38.796 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.796 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.796 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.796 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:38.796 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:38.796 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.796 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.796 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.796 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:38.796 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:38.796 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:28:38.796 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:38.796 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:38.796 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:38.796 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:38.796 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y3NTYwN2YwNDJhZjIxZjNmMDllNDMyNGMyOTRhODlyenFt: 00:28:38.796 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDhjMTkyNzdiZDUzNDk5Zjc0ZGJiYWJjNzBhNjZmZTRlMzM0ZmZmYTNkZTU0OWQyMThmZWU1NWQxZmUyNTEzMRf8FiE=: 00:28:38.796 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:38.796 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:38.796 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y3NTYwN2YwNDJhZjIxZjNmMDllNDMyNGMyOTRhODlyenFt: 00:28:38.796 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDhjMTkyNzdiZDUzNDk5Zjc0ZGJiYWJjNzBhNjZmZTRlMzM0ZmZmYTNkZTU0OWQyMThmZWU1NWQxZmUyNTEzMRf8FiE=: ]] 00:28:38.796 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDhjMTkyNzdiZDUzNDk5Zjc0ZGJiYWJjNzBhNjZmZTRlMzM0ZmZmYTNkZTU0OWQyMThmZWU1NWQxZmUyNTEzMRf8FiE=: 00:28:38.796 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:28:38.796 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:38.796 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:38.796 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:38.796 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:38.796 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:38.796 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:38.796 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.796 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.796 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.796 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:38.796 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:38.796 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:38.796 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:38.796 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:38.796 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:38.796 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:38.796 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:38.796 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:38.796 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:38.796 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:38.796 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:38.796 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.796 12:27:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.361 nvme0n1 00:28:39.361 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.362 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.362 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:39.362 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.362 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.362 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.362 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.362 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:39.362 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.362 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.362 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.362 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:39.362 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:28:39.362 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:39.362 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:39.362 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:39.362 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:39.362 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGFlMmIzZmMzODA1ZWNhN2RjM2NiZGY3NTU0YTY0M2RhYzA2MzhhNDc1ZTY1MDcyP0w/ng==: 00:28:39.362 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmVlMjg0M2JhNGIyZjUwN2E2ZjQxMmYyNjQxMzAzNTNlNjhlMTdkNjg5NjU0NDI5Rifouw==: 00:28:39.362 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:39.362 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:39.362 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NGFlMmIzZmMzODA1ZWNhN2RjM2NiZGY3NTU0YTY0M2RhYzA2MzhhNDc1ZTY1MDcyP0w/ng==: 00:28:39.362 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmVlMjg0M2JhNGIyZjUwN2E2ZjQxMmYyNjQxMzAzNTNlNjhlMTdkNjg5NjU0NDI5Rifouw==: ]] 00:28:39.362 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmVlMjg0M2JhNGIyZjUwN2E2ZjQxMmYyNjQxMzAzNTNlNjhlMTdkNjg5NjU0NDI5Rifouw==: 00:28:39.362 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:28:39.362 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:39.362 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:39.362 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:39.362 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:39.362 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:39.362 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:39.362 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.362 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.362 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.362 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:39.362 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:39.362 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:39.362 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:39.362 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:39.362 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:39.362 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:39.362 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:39.362 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:39.362 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:39.362 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:39.362 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:39.362 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.362 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.926 nvme0n1 00:28:39.926 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.926 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.926 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:39.926 12:27:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.926 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.926 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.926 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.926 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:39.926 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.926 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.926 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.926 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:39.926 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:28:39.926 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:39.926 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:39.926 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:39.926 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:39.926 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA3NGYzMjVlY2Y1NjZmZGZkMjFjNDYyOWZjNWM2MmHeZ/3p: 00:28:39.926 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGFkNzY0NmEzZGNlNGQ4ZDE3ODUwZGI0ZjNhYWJmOGFuLfZN: 00:28:39.926 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:39.926 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:39.926 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA3NGYzMjVlY2Y1NjZmZGZkMjFjNDYyOWZjNWM2MmHeZ/3p: 00:28:39.926 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGFkNzY0NmEzZGNlNGQ4ZDE3ODUwZGI0ZjNhYWJmOGFuLfZN: ]] 00:28:39.926 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGFkNzY0NmEzZGNlNGQ4ZDE3ODUwZGI0ZjNhYWJmOGFuLfZN: 00:28:39.926 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:28:39.926 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:39.926 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:39.926 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:39.926 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:39.926 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:39.926 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:39.926 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.926 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.926 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.926 12:27:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:39.926 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:39.926 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:39.926 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:39.926 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:39.926 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:39.926 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:39.926 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:39.926 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:39.926 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:39.926 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:39.926 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:39.926 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.926 12:27:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.860 nvme0n1 00:28:40.860 12:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.860 12:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.860 12:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.860 12:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.860 12:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.860 12:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.860 12:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.860 12:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.860 12:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.860 12:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.860 12:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.860 12:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.860 12:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:28:40.860 12:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.860 12:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:40.860 12:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:40.860 12:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:40.860 12:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YmI4MzRjOTQwZDVhMzcwMWVlZTYxYjFlNTExY2UxODgzNDQ3N2M2ZjY0ZDgxNmExl6cJRg==: 00:28:40.860 12:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDljMWQ4MWJiOTY0NmQ3ZTE5OGZmZDkzMTQ1MDFhNGMny4oJ: 00:28:40.860 12:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:40.860 12:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:40.860 12:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmI4MzRjOTQwZDVhMzcwMWVlZTYxYjFlNTExY2UxODgzNDQ3N2M2ZjY0ZDgxNmExl6cJRg==: 00:28:40.860 12:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDljMWQ4MWJiOTY0NmQ3ZTE5OGZmZDkzMTQ1MDFhNGMny4oJ: ]] 00:28:40.860 12:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDljMWQ4MWJiOTY0NmQ3ZTE5OGZmZDkzMTQ1MDFhNGMny4oJ: 00:28:40.860 12:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:28:40.860 12:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.860 12:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:40.860 12:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:40.860 12:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:40.860 12:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.860 12:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:40.860 12:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.860 12:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.860 12:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.860 12:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.860 12:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:40.860 12:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:40.860 12:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:40.860 12:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.860 12:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.860 12:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:40.860 12:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.860 12:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:40.860 12:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:40.860 12:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:40.860 12:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:40.860 12:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.860 
12:27:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.426 nvme0n1 00:28:41.426 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.426 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.426 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.426 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.426 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.426 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.426 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.426 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.426 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.426 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.426 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.426 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.426 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:28:41.426 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.426 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:41.426 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:41.426 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:41.426 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWNmZjU2ZTM5ZWZmMzE3ZGNjYmMyYmM2NTBkMWI2NDBjZWZhY2VkYTIzMGYyMWE5MjlkMDI4NzNiMzk2NTlkOHDyzYw=: 00:28:41.426 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:41.426 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:41.426 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:41.426 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWNmZjU2ZTM5ZWZmMzE3ZGNjYmMyYmM2NTBkMWI2NDBjZWZhY2VkYTIzMGYyMWE5MjlkMDI4NzNiMzk2NTlkOHDyzYw=: 00:28:41.426 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:41.426 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:28:41.426 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.426 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:41.426 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:41.426 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:41.426 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.426 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:41.426 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.426 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.426 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.426 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.426 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:41.426 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:41.426 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:41.426 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.426 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:41.426 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:41.426 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.426 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:41.426 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:41.426 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:41.426 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:41.426 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.426 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.993 nvme0n1 00:28:41.993 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.993 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.993 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.993 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.993 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.993 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.993 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.993 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.993 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.993 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.993 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.993 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:41.993 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:41.993 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.993 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:28:41.993 12:27:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.993 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:41.993 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:41.993 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:41.993 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y3NTYwN2YwNDJhZjIxZjNmMDllNDMyNGMyOTRhODlyenFt: 00:28:41.993 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDhjMTkyNzdiZDUzNDk5Zjc0ZGJiYWJjNzBhNjZmZTRlMzM0ZmZmYTNkZTU0OWQyMThmZWU1NWQxZmUyNTEzMRf8FiE=: 00:28:41.994 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:41.994 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:41.994 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y3NTYwN2YwNDJhZjIxZjNmMDllNDMyNGMyOTRhODlyenFt: 00:28:41.994 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDhjMTkyNzdiZDUzNDk5Zjc0ZGJiYWJjNzBhNjZmZTRlMzM0ZmZmYTNkZTU0OWQyMThmZWU1NWQxZmUyNTEzMRf8FiE=: ]] 00:28:41.994 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDhjMTkyNzdiZDUzNDk5Zjc0ZGJiYWJjNzBhNjZmZTRlMzM0ZmZmYTNkZTU0OWQyMThmZWU1NWQxZmUyNTEzMRf8FiE=: 00:28:41.994 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:28:41.994 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.994 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:41.994 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:41.994 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:41.994 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.994 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:41.994 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.994 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.994 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.994 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.994 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:41.994 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:41.994 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:41.994 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.994 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:41.994 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:41.994 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.994 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:41.994 12:27:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:41.994 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:41.994 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:41.994 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.994 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.252 nvme0n1 00:28:42.252 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.252 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:42.252 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.252 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.252 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:42.252 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.252 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.252 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:42.252 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.252 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.252 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.252 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:42.252 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:28:42.252 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:42.252 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:42.252 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:42.252 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:42.252 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGFlMmIzZmMzODA1ZWNhN2RjM2NiZGY3NTU0YTY0M2RhYzA2MzhhNDc1ZTY1MDcyP0w/ng==: 00:28:42.252 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmVlMjg0M2JhNGIyZjUwN2E2ZjQxMmYyNjQxMzAzNTNlNjhlMTdkNjg5NjU0NDI5Rifouw==: 00:28:42.252 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:42.252 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:42.252 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGFlMmIzZmMzODA1ZWNhN2RjM2NiZGY3NTU0YTY0M2RhYzA2MzhhNDc1ZTY1MDcyP0w/ng==: 00:28:42.252 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmVlMjg0M2JhNGIyZjUwN2E2ZjQxMmYyNjQxMzAzNTNlNjhlMTdkNjg5NjU0NDI5Rifouw==: ]] 00:28:42.252 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmVlMjg0M2JhNGIyZjUwN2E2ZjQxMmYyNjQxMzAzNTNlNjhlMTdkNjg5NjU0NDI5Rifouw==: 00:28:42.252 12:27:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:28:42.252 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:42.252 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:42.252 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:42.252 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:42.252 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:42.252 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:42.252 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.252 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.252 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.252 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:42.252 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:42.252 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:42.252 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:42.252 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:42.252 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:42.252 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:42.252 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:42.252 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:42.252 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:42.252 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:42.252 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:42.252 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.252 12:27:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.252 nvme0n1 00:28:42.252 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.252 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:42.252 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:42.252 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.253 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.253 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.253 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.253 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:42.253 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.253 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.510 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.510 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:42.510 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:28:42.510 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:42.510 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:42.510 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:42.510 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:42.510 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA3NGYzMjVlY2Y1NjZmZGZkMjFjNDYyOWZjNWM2MmHeZ/3p: 00:28:42.510 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGFkNzY0NmEzZGNlNGQ4ZDE3ODUwZGI0ZjNhYWJmOGFuLfZN: 00:28:42.510 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:42.510 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:42.510 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA3NGYzMjVlY2Y1NjZmZGZkMjFjNDYyOWZjNWM2MmHeZ/3p: 00:28:42.510 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGFkNzY0NmEzZGNlNGQ4ZDE3ODUwZGI0ZjNhYWJmOGFuLfZN: ]] 00:28:42.510 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGFkNzY0NmEzZGNlNGQ4ZDE3ODUwZGI0ZjNhYWJmOGFuLfZN: 00:28:42.510 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:28:42.510 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:42.510 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:42.510 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:42.510 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:42.510 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:42.510 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:42.510 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.510 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.510 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.510 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:42.510 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:42.510 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:42.510 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:42.510 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:42.510 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:42.510 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:42.511 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:42.511 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:42.511 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:42.511 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:42.511 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:42.511 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.511 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.511 nvme0n1 00:28:42.511 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.511 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:42.511 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:42.511 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.511 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.511 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.511 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.511 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:42.511 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.511 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.511 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.511 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:42.511 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:28:42.511 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:42.511 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:42.511 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:42.511 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:42.511 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmI4MzRjOTQwZDVhMzcwMWVlZTYxYjFlNTExY2UxODgzNDQ3N2M2ZjY0ZDgxNmExl6cJRg==: 00:28:42.511 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDljMWQ4MWJiOTY0NmQ3ZTE5OGZmZDkzMTQ1MDFhNGMny4oJ: 00:28:42.511 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:42.511 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:42.511 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:YmI4MzRjOTQwZDVhMzcwMWVlZTYxYjFlNTExY2UxODgzNDQ3N2M2ZjY0ZDgxNmExl6cJRg==: 00:28:42.511 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDljMWQ4MWJiOTY0NmQ3ZTE5OGZmZDkzMTQ1MDFhNGMny4oJ: ]] 00:28:42.511 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDljMWQ4MWJiOTY0NmQ3ZTE5OGZmZDkzMTQ1MDFhNGMny4oJ: 00:28:42.511 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:28:42.511 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:42.511 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:42.511 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:42.511 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:42.511 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:42.511 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:42.511 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.511 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.511 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.511 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:42.511 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:42.511 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:42.511 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:42.511 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:42.511 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:42.511 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:42.511 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:42.511 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:42.511 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:42.511 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:42.511 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:42.511 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.511 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.511 nvme0n1 00:28:42.511 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.511 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:42.511 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.511 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:42.511 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:42.769 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.769 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.769 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:42.769 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.769 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.769 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.769 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:42.769 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:28:42.769 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:42.769 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:42.769 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:42.769 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:42.769 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWNmZjU2ZTM5ZWZmMzE3ZGNjYmMyYmM2NTBkMWI2NDBjZWZhY2VkYTIzMGYyMWE5MjlkMDI4NzNiMzk2NTlkOHDyzYw=: 00:28:42.769 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:42.769 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:42.769 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:42.769 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWNmZjU2ZTM5ZWZmMzE3ZGNjYmMyYmM2NTBkMWI2NDBjZWZhY2VkYTIzMGYyMWE5MjlkMDI4NzNiMzk2NTlkOHDyzYw=: 00:28:42.769 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:42.769 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:28:42.769 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:42.769 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:42.769 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:42.769 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:42.769 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:42.769 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:42.769 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.769 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.769 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.769 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:42.769 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:42.769 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:28:42.769 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:42.769 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:42.769 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:42.769 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:42.769 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:42.770 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:42.770 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:42.770 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:42.770 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:42.770 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.770 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.770 nvme0n1 00:28:42.770 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.770 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:42.770 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.770 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.770 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:42.770 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.770 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.770 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:42.770 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.770 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.770 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.770 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:42.770 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:42.770 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:28:42.770 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:42.770 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:42.770 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:42.770 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:42.770 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y3NTYwN2YwNDJhZjIxZjNmMDllNDMyNGMyOTRhODlyenFt: 00:28:42.770 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NDhjMTkyNzdiZDUzNDk5Zjc0ZGJiYWJjNzBhNjZmZTRlMzM0ZmZmYTNkZTU0OWQyMThmZWU1NWQxZmUyNTEzMRf8FiE=: 00:28:42.770 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:42.770 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:42.770 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y3NTYwN2YwNDJhZjIxZjNmMDllNDMyNGMyOTRhODlyenFt: 00:28:42.770 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDhjMTkyNzdiZDUzNDk5Zjc0ZGJiYWJjNzBhNjZmZTRlMzM0ZmZmYTNkZTU0OWQyMThmZWU1NWQxZmUyNTEzMRf8FiE=: ]] 00:28:42.770 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDhjMTkyNzdiZDUzNDk5Zjc0ZGJiYWJjNzBhNjZmZTRlMzM0ZmZmYTNkZTU0OWQyMThmZWU1NWQxZmUyNTEzMRf8FiE=: 00:28:42.770 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:28:42.770 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:42.770 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:42.770 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:42.770 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:42.770 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:42.770 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:42.770 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.770 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.770 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.770 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:42.770 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:42.770 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:42.770 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:42.770 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:42.770 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:42.770 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:42.770 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:42.770 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:42.770 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:42.770 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:42.770 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:42.770 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.770 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:43.028 nvme0n1 00:28:43.028 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.028 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:43.028 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:43.028 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.028 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.028 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.028 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:43.028 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:43.028 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.028 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.028 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.028 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:43.028 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:28:43.028 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:43.028 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:43.028 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:43.028 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:43.028 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGFlMmIzZmMzODA1ZWNhN2RjM2NiZGY3NTU0YTY0M2RhYzA2MzhhNDc1ZTY1MDcyP0w/ng==: 00:28:43.028 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmVlMjg0M2JhNGIyZjUwN2E2ZjQxMmYyNjQxMzAzNTNlNjhlMTdkNjg5NjU0NDI5Rifouw==: 00:28:43.028 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:43.028 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:43.028 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGFlMmIzZmMzODA1ZWNhN2RjM2NiZGY3NTU0YTY0M2RhYzA2MzhhNDc1ZTY1MDcyP0w/ng==: 00:28:43.028 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmVlMjg0M2JhNGIyZjUwN2E2ZjQxMmYyNjQxMzAzNTNlNjhlMTdkNjg5NjU0NDI5Rifouw==: ]] 00:28:43.028 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmVlMjg0M2JhNGIyZjUwN2E2ZjQxMmYyNjQxMzAzNTNlNjhlMTdkNjg5NjU0NDI5Rifouw==: 00:28:43.028 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:28:43.028 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:43.028 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:43.028 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:43.028 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:43.029 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:28:43.029 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:43.029 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.029 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.029 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.029 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:43.029 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:43.029 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:43.029 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:43.029 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:43.029 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:43.029 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:43.029 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:43.029 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:43.029 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:43.029 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:43.029 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:43.029 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.029 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.029 nvme0n1 00:28:43.029 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.029 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:43.029 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:43.029 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.029 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.029 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.287 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:43.287 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:43.287 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.287 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.287 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.287 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:43.287 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:28:43.287 
12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:43.287 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:43.287 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:43.287 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:43.287 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA3NGYzMjVlY2Y1NjZmZGZkMjFjNDYyOWZjNWM2MmHeZ/3p: 00:28:43.287 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGFkNzY0NmEzZGNlNGQ4ZDE3ODUwZGI0ZjNhYWJmOGFuLfZN: 00:28:43.287 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:43.287 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:43.287 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA3NGYzMjVlY2Y1NjZmZGZkMjFjNDYyOWZjNWM2MmHeZ/3p: 00:28:43.287 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGFkNzY0NmEzZGNlNGQ4ZDE3ODUwZGI0ZjNhYWJmOGFuLfZN: ]] 00:28:43.287 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGFkNzY0NmEzZGNlNGQ4ZDE3ODUwZGI0ZjNhYWJmOGFuLfZN: 00:28:43.287 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:28:43.287 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:43.287 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:43.287 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:43.287 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:43.287 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:43.287 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:43.287 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.287 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.287 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.287 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:43.287 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:43.287 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:43.287 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:43.287 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:43.287 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:43.287 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:43.287 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:43.287 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:43.287 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:43.287 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:43.287 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:43.287 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.287 12:27:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.287 nvme0n1 00:28:43.287 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.287 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:43.287 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:43.287 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.287 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.287 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.287 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:43.287 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:43.287 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.287 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.287 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.287 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:43.287 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:28:43.287 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:43.287 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:43.287 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:43.287 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:43.287 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmI4MzRjOTQwZDVhMzcwMWVlZTYxYjFlNTExY2UxODgzNDQ3N2M2ZjY0ZDgxNmExl6cJRg==: 00:28:43.287 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDljMWQ4MWJiOTY0NmQ3ZTE5OGZmZDkzMTQ1MDFhNGMny4oJ: 00:28:43.287 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:43.287 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:43.287 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmI4MzRjOTQwZDVhMzcwMWVlZTYxYjFlNTExY2UxODgzNDQ3N2M2ZjY0ZDgxNmExl6cJRg==: 00:28:43.287 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDljMWQ4MWJiOTY0NmQ3ZTE5OGZmZDkzMTQ1MDFhNGMny4oJ: ]] 00:28:43.287 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDljMWQ4MWJiOTY0NmQ3ZTE5OGZmZDkzMTQ1MDFhNGMny4oJ: 00:28:43.287 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:28:43.287 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:43.287 
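
An aside on the ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) line that recurs at host/auth.sh@58 above: it is bash's ${var:+word} conditional expansion feeding an array assignment, so the --dhchap-ctrlr-key flag and its value come into existence only when a bidirectional (controller) key is configured for that key id. A self-contained sketch with placeholder secrets (the values here are illustrative, not the script's):

    ckeys=([1]="DHHC-1:02:placeholder:" [4]="")   # keyid 4 carries no controller key, as in this log
    for keyid in "${!ckeys[@]}"; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid adds ${#ckey[@]} argument(s)"   # prints 2 for keyid 1, 0 for keyid 4
    done

Because the expansion happens inside an array assignment, an empty controller key yields zero extra arguments rather than an empty string, which is what lets the keyid-4 passes in this log attach with --dhchap-key only.
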
12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:43.287 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:43.287 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:43.287 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:43.287 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:43.287 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.287 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.287 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.287 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:43.287 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:43.287 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:43.287 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:43.287 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:43.287 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:43.287 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:43.287 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:43.287 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:43.287 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:43.287 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:43.287 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:43.287 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.287 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.544 nvme0n1 00:28:43.544 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.544 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:43.544 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:43.544 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.544 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.544 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.544 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:43.544 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:43.544 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.544 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:43.544 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.544 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:43.544 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:28:43.544 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:43.544 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:43.544 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:43.544 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:43.544 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWNmZjU2ZTM5ZWZmMzE3ZGNjYmMyYmM2NTBkMWI2NDBjZWZhY2VkYTIzMGYyMWE5MjlkMDI4NzNiMzk2NTlkOHDyzYw=: 00:28:43.544 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:43.544 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:43.544 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:43.544 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWNmZjU2ZTM5ZWZmMzE3ZGNjYmMyYmM2NTBkMWI2NDBjZWZhY2VkYTIzMGYyMWE5MjlkMDI4NzNiMzk2NTlkOHDyzYw=: 00:28:43.544 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:43.545 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:28:43.545 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:43.545 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:43.545 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:43.545 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:43.545 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:43.545 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:43.545 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.545 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.545 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.545 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:43.545 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:43.545 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:43.545 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:43.545 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:43.545 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:43.545 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:43.545 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:43.545 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:43.545 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:43.545 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:43.545 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:43.545 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.545 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.545 nvme0n1 00:28:43.545 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.545 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:43.545 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.545 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.545 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:43.545 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.545 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:43.545 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:43.545 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.545 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y3NTYwN2YwNDJhZjIxZjNmMDllNDMyNGMyOTRhODlyenFt: 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDhjMTkyNzdiZDUzNDk5Zjc0ZGJiYWJjNzBhNjZmZTRlMzM0ZmZmYTNkZTU0OWQyMThmZWU1NWQxZmUyNTEzMRf8FiE=: 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y3NTYwN2YwNDJhZjIxZjNmMDllNDMyNGMyOTRhODlyenFt: 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDhjMTkyNzdiZDUzNDk5Zjc0ZGJiYWJjNzBhNjZmZTRlMzM0ZmZmYTNkZTU0OWQyMThmZWU1NWQxZmUyNTEzMRf8FiE=: ]] 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NDhjMTkyNzdiZDUzNDk5Zjc0ZGJiYWJjNzBhNjZmZTRlMzM0ZmZmYTNkZTU0OWQyMThmZWU1NWQxZmUyNTEzMRf8FiE=: 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.802 nvme0n1 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.802 
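
The secrets echoed here use the NVMe-oF DH-HMAC-CHAP representation "DHHC-1:<hh>:<base64>:". Assuming the standard layout, where <hh> selects the optional secret-transformation hash (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 payload decodes to the raw secret followed by a 4-byte CRC-32, a key taken from this log can be sanity-checked with nothing but base64 and wc:

    key='DHHC-1:00:Y2Y3NTYwN2YwNDJhZjIxZjNmMDllNDMyNGMyOTRhODlyenFt:'  # key0 from this log
    b64=${key#DHHC-1:*:}   # strip the "DHHC-1:00:" prefix
    b64=${b64%:}           # and the trailing colon
    echo "$b64" | base64 -d | wc -c   # 36 bytes: a 32-byte secret plus the 4-byte CRC-32

The 01/02/03 secrets above decode to 36, 52, and 68 bytes respectively, consistent with 32-, 48-, and 64-byte secrets under that assumed layout.
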
12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGFlMmIzZmMzODA1ZWNhN2RjM2NiZGY3NTU0YTY0M2RhYzA2MzhhNDc1ZTY1MDcyP0w/ng==: 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmVlMjg0M2JhNGIyZjUwN2E2ZjQxMmYyNjQxMzAzNTNlNjhlMTdkNjg5NjU0NDI5Rifouw==: 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGFlMmIzZmMzODA1ZWNhN2RjM2NiZGY3NTU0YTY0M2RhYzA2MzhhNDc1ZTY1MDcyP0w/ng==: 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmVlMjg0M2JhNGIyZjUwN2E2ZjQxMmYyNjQxMzAzNTNlNjhlMTdkNjg5NjU0NDI5Rifouw==: ]] 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmVlMjg0M2JhNGIyZjUwN2E2ZjQxMmYyNjQxMzAzNTNlNjhlMTdkNjg5NjU0NDI5Rifouw==: 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:43.802 12:27:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:43.802 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:43.803 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:43.803 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:43.803 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:43.803 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:43.803 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.803 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.060 nvme0n1 00:28:44.060 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.060 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:44.060 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:44.060 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.060 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.060 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.060 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:44.060 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:44.060 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.060 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.060 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.060 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:44.060 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:28:44.060 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:44.060 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:44.060 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:44.060 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:44.060 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA3NGYzMjVlY2Y1NjZmZGZkMjFjNDYyOWZjNWM2MmHeZ/3p: 00:28:44.060 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGFkNzY0NmEzZGNlNGQ4ZDE3ODUwZGI0ZjNhYWJmOGFuLfZN: 00:28:44.060 12:27:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:44.060 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:44.060 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA3NGYzMjVlY2Y1NjZmZGZkMjFjNDYyOWZjNWM2MmHeZ/3p: 00:28:44.060 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGFkNzY0NmEzZGNlNGQ4ZDE3ODUwZGI0ZjNhYWJmOGFuLfZN: ]] 00:28:44.060 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGFkNzY0NmEzZGNlNGQ4ZDE3ODUwZGI0ZjNhYWJmOGFuLfZN: 00:28:44.060 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:28:44.060 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:44.060 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:44.060 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:44.060 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:44.060 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:44.060 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:44.060 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.061 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.061 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.061 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:44.061 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:44.061 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:44.061 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:44.061 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:44.061 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:44.061 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:44.061 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:44.061 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:44.061 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:44.061 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:44.061 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:44.061 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.061 12:27:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.318 nvme0n1 00:28:44.318 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.318 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:44.318 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:44.318 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.318 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.318 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.318 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:44.318 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:44.318 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.318 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.318 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.318 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:44.318 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:28:44.318 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:44.318 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:44.318 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:44.318 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:44.318 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmI4MzRjOTQwZDVhMzcwMWVlZTYxYjFlNTExY2UxODgzNDQ3N2M2ZjY0ZDgxNmExl6cJRg==: 00:28:44.318 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDljMWQ4MWJiOTY0NmQ3ZTE5OGZmZDkzMTQ1MDFhNGMny4oJ: 00:28:44.318 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:44.318 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:44.318 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmI4MzRjOTQwZDVhMzcwMWVlZTYxYjFlNTExY2UxODgzNDQ3N2M2ZjY0ZDgxNmExl6cJRg==: 00:28:44.318 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDljMWQ4MWJiOTY0NmQ3ZTE5OGZmZDkzMTQ1MDFhNGMny4oJ: ]] 00:28:44.318 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDljMWQ4MWJiOTY0NmQ3ZTE5OGZmZDkzMTQ1MDFhNGMny4oJ: 00:28:44.318 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:28:44.318 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:44.318 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:44.318 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:44.318 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:44.318 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:44.318 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:44.318 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.318 12:27:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.318 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.318 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:44.318 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:44.318 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:44.318 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:44.318 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:44.318 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:44.318 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:44.318 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:44.318 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:44.318 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:44.318 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:44.318 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:44.318 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.318 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.576 nvme0n1 00:28:44.576 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.576 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:44.576 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.576 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.576 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:44.576 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.576 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:44.576 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:44.576 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.576 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.576 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.576 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:44.576 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:28:44.576 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:44.576 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:44.576 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:44.576 
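
Each of these per-key passes boils down to the same four host-side RPCs. A minimal standalone sketch, assuming a running SPDK target reachable as in this log and DH-CHAP keys already registered under the names key0/ckey0 (scripts/rpc.py ships with SPDK; every flag below appears verbatim in the log above):

    ./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0
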
12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:44.576 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWNmZjU2ZTM5ZWZmMzE3ZGNjYmMyYmM2NTBkMWI2NDBjZWZhY2VkYTIzMGYyMWE5MjlkMDI4NzNiMzk2NTlkOHDyzYw=: 00:28:44.576 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:44.576 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:44.576 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:44.576 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWNmZjU2ZTM5ZWZmMzE3ZGNjYmMyYmM2NTBkMWI2NDBjZWZhY2VkYTIzMGYyMWE5MjlkMDI4NzNiMzk2NTlkOHDyzYw=: 00:28:44.576 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:44.576 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:28:44.576 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:44.576 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:44.576 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:44.576 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:44.576 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:44.576 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:44.576 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.576 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.576 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.576 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:44.576 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:44.576 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:44.576 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:44.576 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:44.576 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:44.576 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:44.576 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:44.576 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:44.576 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:44.576 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:44.576 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:44.576 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.576 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
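
Immediately below, the outer loop at host/auth.sh@101 advances from ffdhe4096 to ffdhe6144: this whole section is one nested sweep of DH groups against key ids under the sha512 digest. A sketch of that driver loop, with the two auth.sh helpers replaced by echo stubs (the real ones program the kernel nvmet target and run the attach/verify/detach cycle sketched above):

    # Echo stubs standing in for the auth.sh helpers of the same names.
    nvmet_auth_set_key()   { echo "target: digest=$1 dhgroup=$2 keyid=$3"; }
    connect_authenticate() { echo "host:   digest=$1 dhgroup=$2 keyid=$3"; }

    keys=(k0 k1 k2 k3 k4)   # placeholder key table; the log exercises key ids 0-4
    for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key   sha512 "$dhgroup" "$keyid"
            connect_authenticate sha512 "$dhgroup" "$keyid"
        done
    done
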
00:28:44.834 nvme0n1 00:28:44.834 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.834 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:44.834 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:44.834 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.834 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.834 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.834 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:44.834 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:44.834 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.834 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.834 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.834 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:44.834 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:44.834 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:28:44.834 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:44.834 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:44.834 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:44.834 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:44.834 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y3NTYwN2YwNDJhZjIxZjNmMDllNDMyNGMyOTRhODlyenFt: 00:28:44.834 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDhjMTkyNzdiZDUzNDk5Zjc0ZGJiYWJjNzBhNjZmZTRlMzM0ZmZmYTNkZTU0OWQyMThmZWU1NWQxZmUyNTEzMRf8FiE=: 00:28:44.834 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:44.834 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:44.834 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y3NTYwN2YwNDJhZjIxZjNmMDllNDMyNGMyOTRhODlyenFt: 00:28:44.834 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDhjMTkyNzdiZDUzNDk5Zjc0ZGJiYWJjNzBhNjZmZTRlMzM0ZmZmYTNkZTU0OWQyMThmZWU1NWQxZmUyNTEzMRf8FiE=: ]] 00:28:44.835 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDhjMTkyNzdiZDUzNDk5Zjc0ZGJiYWJjNzBhNjZmZTRlMzM0ZmZmYTNkZTU0OWQyMThmZWU1NWQxZmUyNTEzMRf8FiE=: 00:28:44.835 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:28:44.835 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:44.835 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:44.835 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:44.835 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:44.835 12:27:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:44.835 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:44.835 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.835 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.835 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.835 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:44.835 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:44.835 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:44.835 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:44.835 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:44.835 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:44.835 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:44.835 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:44.835 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:44.835 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:44.835 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:44.835 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:44.835 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.835 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.093 nvme0n1 00:28:45.093 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.093 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:45.093 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:45.093 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.093 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.093 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.093 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:45.093 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:45.093 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.093 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.093 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.093 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:45.093 12:27:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:28:45.093 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:45.093 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:45.093 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:45.093 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:45.093 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGFlMmIzZmMzODA1ZWNhN2RjM2NiZGY3NTU0YTY0M2RhYzA2MzhhNDc1ZTY1MDcyP0w/ng==: 00:28:45.093 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmVlMjg0M2JhNGIyZjUwN2E2ZjQxMmYyNjQxMzAzNTNlNjhlMTdkNjg5NjU0NDI5Rifouw==: 00:28:45.093 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:45.093 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:45.093 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGFlMmIzZmMzODA1ZWNhN2RjM2NiZGY3NTU0YTY0M2RhYzA2MzhhNDc1ZTY1MDcyP0w/ng==: 00:28:45.093 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmVlMjg0M2JhNGIyZjUwN2E2ZjQxMmYyNjQxMzAzNTNlNjhlMTdkNjg5NjU0NDI5Rifouw==: ]] 00:28:45.093 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmVlMjg0M2JhNGIyZjUwN2E2ZjQxMmYyNjQxMzAzNTNlNjhlMTdkNjg5NjU0NDI5Rifouw==: 00:28:45.093 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:28:45.093 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:45.093 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:45.093 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:45.093 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:45.093 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:45.093 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:45.093 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.093 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.093 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.093 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:45.093 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:45.093 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:45.093 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:45.093 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:45.093 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:45.093 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:45.093 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:45.093 12:27:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:45.093 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:45.093 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:45.093 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:45.093 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.093 12:27:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.351 nvme0n1 00:28:45.351 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.351 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:45.351 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:45.351 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.351 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.351 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.351 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:45.351 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:45.351 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.351 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.351 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.351 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:45.351 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:28:45.351 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:45.351 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:45.351 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:45.351 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:45.351 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA3NGYzMjVlY2Y1NjZmZGZkMjFjNDYyOWZjNWM2MmHeZ/3p: 00:28:45.351 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGFkNzY0NmEzZGNlNGQ4ZDE3ODUwZGI0ZjNhYWJmOGFuLfZN: 00:28:45.351 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:45.351 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:45.351 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA3NGYzMjVlY2Y1NjZmZGZkMjFjNDYyOWZjNWM2MmHeZ/3p: 00:28:45.351 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGFkNzY0NmEzZGNlNGQ4ZDE3ODUwZGI0ZjNhYWJmOGFuLfZN: ]] 00:28:45.351 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGFkNzY0NmEzZGNlNGQ4ZDE3ODUwZGI0ZjNhYWJmOGFuLfZN: 00:28:45.351 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:28:45.351 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:45.351 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:45.351 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:45.351 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:45.351 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:45.351 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:45.351 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.351 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.351 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.351 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:45.351 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:45.351 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:45.351 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:45.351 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:45.351 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:45.351 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:45.351 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:45.351 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:45.352 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:45.352 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:45.352 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:45.352 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.352 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.919 nvme0n1 00:28:45.919 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.919 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:45.919 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.919 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.919 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:45.919 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.919 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:45.919 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:28:45.919 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.919 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.919 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.919 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:45.919 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:28:45.919 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:45.919 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:45.919 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:45.919 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:45.919 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmI4MzRjOTQwZDVhMzcwMWVlZTYxYjFlNTExY2UxODgzNDQ3N2M2ZjY0ZDgxNmExl6cJRg==: 00:28:45.919 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDljMWQ4MWJiOTY0NmQ3ZTE5OGZmZDkzMTQ1MDFhNGMny4oJ: 00:28:45.919 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:45.919 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:45.919 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmI4MzRjOTQwZDVhMzcwMWVlZTYxYjFlNTExY2UxODgzNDQ3N2M2ZjY0ZDgxNmExl6cJRg==: 00:28:45.919 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDljMWQ4MWJiOTY0NmQ3ZTE5OGZmZDkzMTQ1MDFhNGMny4oJ: ]] 00:28:45.919 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDljMWQ4MWJiOTY0NmQ3ZTE5OGZmZDkzMTQ1MDFhNGMny4oJ: 00:28:45.919 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:28:45.919 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:45.919 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:45.919 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:45.919 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:45.919 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:45.919 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:45.919 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.919 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.919 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.919 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:45.919 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:45.919 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:45.919 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:45.919 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:45.919 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:45.919 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:45.919 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:45.919 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:45.919 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:45.919 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:45.919 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:45.919 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.920 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.192 nvme0n1 00:28:46.192 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.192 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:46.192 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.192 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:46.192 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.192 12:27:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.192 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:46.192 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:46.192 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.192 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.192 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.192 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:46.192 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:28:46.192 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:46.192 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:46.192 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:46.192 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:46.192 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWNmZjU2ZTM5ZWZmMzE3ZGNjYmMyYmM2NTBkMWI2NDBjZWZhY2VkYTIzMGYyMWE5MjlkMDI4NzNiMzk2NTlkOHDyzYw=: 00:28:46.192 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:46.192 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:46.192 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:46.192 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YWNmZjU2ZTM5ZWZmMzE3ZGNjYmMyYmM2NTBkMWI2NDBjZWZhY2VkYTIzMGYyMWE5MjlkMDI4NzNiMzk2NTlkOHDyzYw=: 00:28:46.192 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:46.192 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:28:46.192 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:46.192 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:46.192 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:46.192 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:46.192 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:46.192 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:46.192 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.192 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.192 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.192 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:46.192 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:46.192 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:46.192 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:46.192 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:46.192 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:46.192 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:46.192 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:46.192 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:46.192 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:46.192 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:46.192 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:46.192 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.192 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.758 nvme0n1 00:28:46.758 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.758 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:46.758 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:46.758 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.758 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.759 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.759 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:46.759 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:46.759 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.759 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.759 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.759 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:46.759 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:46.759 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:28:46.759 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:46.759 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:46.759 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:46.759 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:46.759 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2Y3NTYwN2YwNDJhZjIxZjNmMDllNDMyNGMyOTRhODlyenFt: 00:28:46.759 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NDhjMTkyNzdiZDUzNDk5Zjc0ZGJiYWJjNzBhNjZmZTRlMzM0ZmZmYTNkZTU0OWQyMThmZWU1NWQxZmUyNTEzMRf8FiE=: 00:28:46.759 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:46.759 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:46.759 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2Y3NTYwN2YwNDJhZjIxZjNmMDllNDMyNGMyOTRhODlyenFt: 00:28:46.759 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NDhjMTkyNzdiZDUzNDk5Zjc0ZGJiYWJjNzBhNjZmZTRlMzM0ZmZmYTNkZTU0OWQyMThmZWU1NWQxZmUyNTEzMRf8FiE=: ]] 00:28:46.759 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NDhjMTkyNzdiZDUzNDk5Zjc0ZGJiYWJjNzBhNjZmZTRlMzM0ZmZmYTNkZTU0OWQyMThmZWU1NWQxZmUyNTEzMRf8FiE=: 00:28:46.759 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:28:46.759 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:46.759 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:46.759 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:46.759 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:46.759 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:46.759 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:46.759 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.759 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.759 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.759 12:27:42 
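The cycle above repeats for each digest/dhgroup/key combination: nvmet_auth_set_key provisions the expected secrets on the target side, and connect_authenticate then connects from the host with the matching key. The xtrace output records only the echoed values, not where they go, so the sketch below is an inference, assuming the standard Linux kernel nvmet configfs attributes under /sys/kernel/config/nvmet/hosts/<hostnqn>/. The secrets themselves use the NVMe-oF DHHC-1 interchange format, in which the second field encodes the HMAC applied to the base64 payload (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512).

    # Hypothetical reconstruction of nvmet_auth_set_key (host/auth.sh@42-51);
    # xtrace hides redirections, so the configfs destinations are assumed.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

        echo "hmac($digest)" > "$host/dhchap_hash"  # e.g. hmac(sha512)
        echo "$dhgroup" > "$host/dhchap_dhgroup"    # e.g. ffdhe8192
        echo "$key" > "$host/dhchap_key"            # host secret, DHHC-1:..
        # The controller key is only written when bidirectional auth is tested:
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrlr_key"
    }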
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:46.759 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:46.759 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:46.759 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:46.759 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:46.759 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:46.759 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:46.759 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:46.759 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:46.759 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:46.759 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:46.759 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:46.759 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.759 12:27:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.325 nvme0n1 00:28:47.325 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.325 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:47.325 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.325 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.325 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:47.325 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.325 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:47.325 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:47.325 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.325 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.325 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.325 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:47.325 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:28:47.325 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:47.325 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:47.325 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:47.325 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:47.326 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NGFlMmIzZmMzODA1ZWNhN2RjM2NiZGY3NTU0YTY0M2RhYzA2MzhhNDc1ZTY1MDcyP0w/ng==: 00:28:47.326 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmVlMjg0M2JhNGIyZjUwN2E2ZjQxMmYyNjQxMzAzNTNlNjhlMTdkNjg5NjU0NDI5Rifouw==: 00:28:47.326 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:47.326 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:47.326 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGFlMmIzZmMzODA1ZWNhN2RjM2NiZGY3NTU0YTY0M2RhYzA2MzhhNDc1ZTY1MDcyP0w/ng==: 00:28:47.326 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmVlMjg0M2JhNGIyZjUwN2E2ZjQxMmYyNjQxMzAzNTNlNjhlMTdkNjg5NjU0NDI5Rifouw==: ]] 00:28:47.326 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmVlMjg0M2JhNGIyZjUwN2E2ZjQxMmYyNjQxMzAzNTNlNjhlMTdkNjg5NjU0NDI5Rifouw==: 00:28:47.326 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:28:47.326 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:47.326 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:47.326 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:47.326 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:47.326 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:47.326 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:47.326 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.326 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.326 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.326 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:47.326 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:47.326 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:47.326 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:47.326 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:47.326 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:47.326 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:47.326 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:47.326 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:47.326 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:47.326 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:47.326 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:47.326 12:27:43 
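Each connect_authenticate round (host/auth.sh@55-65) is the host-side mirror of that provisioning: restrict negotiation to a single digest/dhgroup pair, attach with the key under test, assert that the controller came up, and detach again. A minimal replay with SPDK's scripts/rpc.py, which the suite's rpc_cmd wrapper forwards its arguments to, looks roughly as follows; the rpc.py path is assumed, and key1/ckey1 are the names of keys registered with the application earlier in the test, outside this excerpt.

    rpc=./scripts/rpc.py  # assumed location inside an SPDK checkout

    # Negotiate only the combination under test:
    "$rpc" bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

    # Attach with the host key; adding the controller key makes the
    # authentication bidirectional:
    "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Verify by name, then tear down before the next combination:
    [[ $("$rpc" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    "$rpc" bdev_nvme_detach_controller nvme0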
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.326 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.891 nvme0n1 00:28:47.891 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.891 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:47.891 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:47.891 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.891 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.891 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.149 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:48.149 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:48.149 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.149 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.149 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.149 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:48.149 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:28:48.149 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:48.149 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:48.149 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:48.149 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:48.149 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA3NGYzMjVlY2Y1NjZmZGZkMjFjNDYyOWZjNWM2MmHeZ/3p: 00:28:48.149 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGFkNzY0NmEzZGNlNGQ4ZDE3ODUwZGI0ZjNhYWJmOGFuLfZN: 00:28:48.149 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:48.149 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:48.149 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA3NGYzMjVlY2Y1NjZmZGZkMjFjNDYyOWZjNWM2MmHeZ/3p: 00:28:48.149 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGFkNzY0NmEzZGNlNGQ4ZDE3ODUwZGI0ZjNhYWJmOGFuLfZN: ]] 00:28:48.149 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGFkNzY0NmEzZGNlNGQ4ZDE3ODUwZGI0ZjNhYWJmOGFuLfZN: 00:28:48.149 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:28:48.149 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:48.149 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:48.149 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:48.149 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:48.149 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:48.149 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:48.149 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.149 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.149 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.149 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:48.149 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:48.149 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:48.149 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:48.149 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:48.149 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:48.149 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:48.149 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:48.149 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:48.149 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:48.149 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:48.149 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:48.149 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.149 12:27:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.716 nvme0n1 00:28:48.716 12:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.716 12:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:48.716 12:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.716 12:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.716 12:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:48.716 12:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.716 12:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:48.716 12:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:48.716 12:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.716 12:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.716 12:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.716 12:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:48.716 12:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:28:48.716 12:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:48.716 12:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:48.716 12:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:48.716 12:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:48.716 12:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmI4MzRjOTQwZDVhMzcwMWVlZTYxYjFlNTExY2UxODgzNDQ3N2M2ZjY0ZDgxNmExl6cJRg==: 00:28:48.716 12:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDljMWQ4MWJiOTY0NmQ3ZTE5OGZmZDkzMTQ1MDFhNGMny4oJ: 00:28:48.716 12:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:48.716 12:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:48.716 12:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmI4MzRjOTQwZDVhMzcwMWVlZTYxYjFlNTExY2UxODgzNDQ3N2M2ZjY0ZDgxNmExl6cJRg==: 00:28:48.716 12:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDljMWQ4MWJiOTY0NmQ3ZTE5OGZmZDkzMTQ1MDFhNGMny4oJ: ]] 00:28:48.716 12:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDljMWQ4MWJiOTY0NmQ3ZTE5OGZmZDkzMTQ1MDFhNGMny4oJ: 00:28:48.716 12:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:28:48.716 12:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:48.716 12:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:48.716 12:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:48.716 12:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:48.716 12:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:48.716 12:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:48.716 12:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.716 12:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.716 12:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.716 12:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:48.716 12:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:48.716 12:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:48.716 12:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:48.716 12:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:48.716 12:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:48.716 12:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:48.716 12:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:48.716 12:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:48.716 12:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:48.716 12:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:48.716 12:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:48.716 12:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.716 12:27:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.283 nvme0n1 00:28:49.283 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.283 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:49.283 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.283 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:49.283 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.283 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.542 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:49.542 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:49.542 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.542 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.542 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.542 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:49.542 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:49.542 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:49.542 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:49.542 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:49.542 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:49.542 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YWNmZjU2ZTM5ZWZmMzE3ZGNjYmMyYmM2NTBkMWI2NDBjZWZhY2VkYTIzMGYyMWE5MjlkMDI4NzNiMzk2NTlkOHDyzYw=: 00:28:49.542 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:49.542 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:49.542 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:49.542 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YWNmZjU2ZTM5ZWZmMzE3ZGNjYmMyYmM2NTBkMWI2NDBjZWZhY2VkYTIzMGYyMWE5MjlkMDI4NzNiMzk2NTlkOHDyzYw=: 00:28:49.542 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:49.542 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:28:49.542 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:49.542 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:49.542 12:27:45 
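The get_main_ns_ip helper that keeps reappearing here (nvmf/common.sh@769-783) never stores an address directly: its associative array maps each transport to the name of an environment variable, and bash indirect expansion resolves that name to the address, which is why the trace prints NVMF_INITIATOR_IP before echoing 10.0.0.1. A likely shape, with the name of the transport variable (TEST_TRANSPORT) being an assumption:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            [rdma]=NVMF_FIRST_TARGET_IP
            [tcp]=NVMF_INITIATOR_IP
        )

        # Both the transport and its mapped variable name must be set:
        [[ -n $TEST_TRANSPORT && -n ${ip_candidates[$TEST_TRANSPORT]} ]] || return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -n ${!ip} ]] && echo "${!ip}"  # indirect expansion, here 10.0.0.1
    }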
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:49.542 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:49.542 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:49.542 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:49.542 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.542 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.542 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.542 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:49.542 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:49.542 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:49.542 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:49.542 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:49.542 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:49.542 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:49.542 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:49.542 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:49.542 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:49.542 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:49.542 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:49.542 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.542 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.109 nvme0n1 00:28:50.109 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.109 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:50.109 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:50.109 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.109 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.109 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.109 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:50.109 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:50.109 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.109 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.109 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.109 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:50.109 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:50.109 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:50.109 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:50.109 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:50.109 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGFlMmIzZmMzODA1ZWNhN2RjM2NiZGY3NTU0YTY0M2RhYzA2MzhhNDc1ZTY1MDcyP0w/ng==: 00:28:50.109 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmVlMjg0M2JhNGIyZjUwN2E2ZjQxMmYyNjQxMzAzNTNlNjhlMTdkNjg5NjU0NDI5Rifouw==: 00:28:50.109 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:50.109 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:50.109 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGFlMmIzZmMzODA1ZWNhN2RjM2NiZGY3NTU0YTY0M2RhYzA2MzhhNDc1ZTY1MDcyP0w/ng==: 00:28:50.109 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmVlMjg0M2JhNGIyZjUwN2E2ZjQxMmYyNjQxMzAzNTNlNjhlMTdkNjg5NjU0NDI5Rifouw==: ]] 00:28:50.109 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmVlMjg0M2JhNGIyZjUwN2E2ZjQxMmYyNjQxMzAzNTNlNjhlMTdkNjg5NjU0NDI5Rifouw==: 00:28:50.109 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:50.109 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.109 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.109 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.109 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:28:50.109 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.110 2024/11/25 12:27:45 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:28:50.110 request: 00:28:50.110 { 00:28:50.110 "method": "bdev_nvme_attach_controller", 00:28:50.110 "params": { 00:28:50.110 "name": "nvme0", 00:28:50.110 "trtype": "tcp", 00:28:50.110 "traddr": "10.0.0.1", 00:28:50.110 "adrfam": "ipv4", 00:28:50.110 "trsvcid": "4420", 00:28:50.110 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:50.110 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:50.110 "prchk_reftag": false, 00:28:50.110 "prchk_guard": false, 00:28:50.110 "hdgst": false, 00:28:50.110 "ddgst": false, 00:28:50.110 "allow_unrecognized_csi": false 00:28:50.110 } 00:28:50.110 } 00:28:50.110 Got JSON-RPC error response 00:28:50.110 GoRPCClient: error on JSON-RPC call 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # 
get_main_ns_ip 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.110 2024/11/25 12:27:45 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_key:key2 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:28:50.110 request: 00:28:50.110 { 00:28:50.110 "method": "bdev_nvme_attach_controller", 00:28:50.110 "params": { 00:28:50.110 "name": "nvme0", 00:28:50.110 "trtype": "tcp", 00:28:50.110 "traddr": "10.0.0.1", 00:28:50.110 "adrfam": "ipv4", 00:28:50.110 "trsvcid": "4420", 00:28:50.110 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:50.110 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:50.110 "prchk_reftag": false, 00:28:50.110 "prchk_guard": false, 
00:28:50.110 "hdgst": false, 00:28:50.110 "ddgst": false, 00:28:50.110 "dhchap_key": "key2", 00:28:50.110 "allow_unrecognized_csi": false 00:28:50.110 } 00:28:50.110 } 00:28:50.110 Got JSON-RPC error response 00:28:50.110 GoRPCClient: error on JSON-RPC call 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:28:50.110 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.369 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:28:50.369 12:27:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:28:50.369 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:50.369 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:50.369 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:50.369 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:50.369 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:50.369 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:50.369 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:50.369 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:50.369 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:50.369 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:50.369 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:50.369 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:50.369 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:50.369 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:50.369 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:50.369 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t 
rpc_cmd 00:28:50.369 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:50.369 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:50.369 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.369 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.369 2024/11/25 12:27:46 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) dhchap_ctrlr_key:ckey2 dhchap_key:key1 hdgst:%!s(bool=false) hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:28:50.369 request: 00:28:50.369 { 00:28:50.369 "method": "bdev_nvme_attach_controller", 00:28:50.369 "params": { 00:28:50.369 "name": "nvme0", 00:28:50.369 "trtype": "tcp", 00:28:50.369 "traddr": "10.0.0.1", 00:28:50.370 "adrfam": "ipv4", 00:28:50.370 "trsvcid": "4420", 00:28:50.370 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:50.370 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:50.370 "prchk_reftag": false, 00:28:50.370 "prchk_guard": false, 00:28:50.370 "hdgst": false, 00:28:50.370 "ddgst": false, 00:28:50.370 "dhchap_key": "key1", 00:28:50.370 "dhchap_ctrlr_key": "ckey2", 00:28:50.370 "allow_unrecognized_csi": false 00:28:50.370 } 00:28:50.370 } 00:28:50.370 Got JSON-RPC error response 00:28:50.370 GoRPCClient: error on JSON-RPC call 00:28:50.370 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:50.370 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:50.370 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:50.370 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:50.370 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:50.370 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:28:50.370 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:50.370 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:50.370 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:50.370 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:50.370 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:50.370 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:50.370 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:50.370 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:50.370 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:50.370 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:28:50.370 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:28:50.370 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.370 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.370 nvme0n1 00:28:50.370 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.370 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:50.370 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:50.370 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:50.370 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:50.370 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:50.370 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA3NGYzMjVlY2Y1NjZmZGZkMjFjNDYyOWZjNWM2MmHeZ/3p: 00:28:50.370 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGFkNzY0NmEzZGNlNGQ4ZDE3ODUwZGI0ZjNhYWJmOGFuLfZN: 00:28:50.370 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:50.370 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:50.370 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA3NGYzMjVlY2Y1NjZmZGZkMjFjNDYyOWZjNWM2MmHeZ/3p: 00:28:50.370 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGFkNzY0NmEzZGNlNGQ4ZDE3ODUwZGI0ZjNhYWJmOGFuLfZN: ]] 00:28:50.370 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGFkNzY0NmEzZGNlNGQ4ZDE3ODUwZGI0ZjNhYWJmOGFuLfZN: 00:28:50.370 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:50.370 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.370 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.370 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.370 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:28:50.370 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:28:50.370 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.370 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.370 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.370 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:50.370 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:50.370 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:50.370 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:50.370 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:50.370 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:50.370 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:50.370 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:50.370 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:50.370 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.370 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.370 2024/11/25 12:27:46 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:ckey2 dhchap_key:key1 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:28:50.370 request: 00:28:50.370 { 00:28:50.370 "method": "bdev_nvme_set_keys", 00:28:50.370 "params": { 00:28:50.370 "name": "nvme0", 00:28:50.370 "dhchap_key": "key1", 00:28:50.370 "dhchap_ctrlr_key": "ckey2" 00:28:50.370 } 00:28:50.370 } 00:28:50.370 Got JSON-RPC error response 00:28:50.370 GoRPCClient: error on JSON-RPC call 00:28:50.370 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:50.370 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:50.370 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:50.370 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:50.628 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:50.628 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:50.628 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.628 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.628 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:50.628 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.628 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:28:50.628 12:27:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:28:51.561 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:51.561 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:51.561 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.561 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.561 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.561 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:28:51.561 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:51.561 12:27:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:51.561 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:51.561 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:51.561 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:51.561 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGFlMmIzZmMzODA1ZWNhN2RjM2NiZGY3NTU0YTY0M2RhYzA2MzhhNDc1ZTY1MDcyP0w/ng==: 00:28:51.561 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YmVlMjg0M2JhNGIyZjUwN2E2ZjQxMmYyNjQxMzAzNTNlNjhlMTdkNjg5NjU0NDI5Rifouw==: 00:28:51.561 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:51.561 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:51.561 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGFlMmIzZmMzODA1ZWNhN2RjM2NiZGY3NTU0YTY0M2RhYzA2MzhhNDc1ZTY1MDcyP0w/ng==: 00:28:51.561 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YmVlMjg0M2JhNGIyZjUwN2E2ZjQxMmYyNjQxMzAzNTNlNjhlMTdkNjg5NjU0NDI5Rifouw==: ]] 00:28:51.561 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YmVlMjg0M2JhNGIyZjUwN2E2ZjQxMmYyNjQxMzAzNTNlNjhlMTdkNjg5NjU0NDI5Rifouw==: 00:28:51.561 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:28:51.561 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:51.561 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:51.561 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:51.561 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:51.561 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:51.561 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:51.561 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:51.561 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:51.561 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:51.561 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:51.561 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:28:51.561 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.561 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.561 nvme0n1 00:28:51.561 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.561 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:51.561 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:51.561 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 
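The re-attach above succeeds because host and controller keys now match (key1/ckey1), where the earlier key1/ckey2 attempt failed the DH-HMAC-CHAP handshake with Code=-5 Input/output error. A minimal sketch of the flow exercised in this passage, written as direct rpc.py invocations instead of the test's rpc_cmd wrapper (the rpc.py path assumes this run's repo layout; all other arguments are copied from the trace):
    # Sketch only: attach with a matching DH-HMAC-CHAP key pair, then rotate
    # both sides to keyid 2. A mismatched rotation (key2/ckey1, attempted in
    # the trace that follows) is rejected with Code=-13 Permission denied.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1 \
        --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
    $rpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2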
00:28:51.561 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:51.561 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:51.561 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YjA3NGYzMjVlY2Y1NjZmZGZkMjFjNDYyOWZjNWM2MmHeZ/3p: 00:28:51.561 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZGFkNzY0NmEzZGNlNGQ4ZDE3ODUwZGI0ZjNhYWJmOGFuLfZN: 00:28:51.561 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:51.561 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:51.561 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YjA3NGYzMjVlY2Y1NjZmZGZkMjFjNDYyOWZjNWM2MmHeZ/3p: 00:28:51.562 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZGFkNzY0NmEzZGNlNGQ4ZDE3ODUwZGI0ZjNhYWJmOGFuLfZN: ]] 00:28:51.562 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZGFkNzY0NmEzZGNlNGQ4ZDE3ODUwZGI0ZjNhYWJmOGFuLfZN: 00:28:51.562 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:51.562 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:51.562 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:51.562 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:51.562 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:51.562 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:51.562 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:51.562 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:51.562 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.562 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.820 2024/11/25 12:27:47 error on JSON-RPC call, method: bdev_nvme_set_keys, params: map[dhchap_ctrlr_key:ckey1 dhchap_key:key2 name:nvme0], err: error received for bdev_nvme_set_keys method, err: Code=-13 Msg=Permission denied 00:28:51.820 request: 00:28:51.820 { 00:28:51.820 "method": "bdev_nvme_set_keys", 00:28:51.820 "params": { 00:28:51.820 "name": "nvme0", 00:28:51.820 "dhchap_key": "key2", 00:28:51.820 "dhchap_ctrlr_key": "ckey1" 00:28:51.820 } 00:28:51.820 } 00:28:51.820 Got JSON-RPC error response 00:28:51.820 GoRPCClient: error on JSON-RPC call 00:28:51.820 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:51.820 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:51.820 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:51.820 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:51.820 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:51.820 12:27:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:28:51.820 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:28:51.820 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.820 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.820 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.820 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:28:51.820 12:27:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:28:52.753 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:28:52.753 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:28:52.753 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.753 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.753 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.753 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:28:52.753 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:28:52.753 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:28:52.753 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:28:52.753 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:52.753 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:28:52.753 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:52.753 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:28:52.753 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:52.753 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:52.753 rmmod nvme_tcp 00:28:52.753 rmmod nvme_fabrics 00:28:52.753 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:52.753 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:28:52.753 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:28:52.753 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 91465 ']' 00:28:52.753 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 91465 00:28:52.753 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 91465 ']' 00:28:52.753 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 91465 00:28:52.753 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:28:52.753 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:52.753 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91465 00:28:52.753 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:52.753 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:52.753 12:27:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91465' 00:28:52.753 killing process with pid 91465 00:28:52.753 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 91465 00:28:52.753 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 91465 00:28:53.011 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:53.011 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:53.011 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:53.011 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:28:53.011 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:28:53.011 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:53.011 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:28:53.011 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:53.011 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:28:53.011 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:28:53.011 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:28:53.011 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:28:53.011 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:28:53.011 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:28:53.011 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:28:53.011 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:28:53.011 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:28:53.011 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:28:53.011 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:28:53.011 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:28:53.011 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:53.011 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:53.011 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:28:53.011 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:53.011 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:53.011 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:53.268 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:28:53.268 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:53.268 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:53.268 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:28:53.268 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:53.268 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:28:53.268 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:53.268 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:53.268 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:53.269 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:53.269 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:28:53.269 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:28:53.269 12:27:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:53.832 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:53.832 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:28:53.832 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:28:53.832 12:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.dGa /tmp/spdk.key-null.4gD /tmp/spdk.key-sha256.b9z /tmp/spdk.key-sha384.SPi /tmp/spdk.key-sha512.7XD /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:28:54.090 12:27:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:54.347 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:54.347 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:28:54.347 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:28:54.347 00:28:54.347 real 0m33.502s 00:28:54.347 user 0m29.247s 00:28:54.347 sys 0m2.970s 00:28:54.347 12:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:54.347 ************************************ 00:28:54.347 END TEST nvmf_auth_host 00:28:54.347 12:27:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.347 ************************************ 00:28:54.347 12:27:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:28:54.347 12:27:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:54.347 12:27:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:54.347 12:27:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:54.347 12:27:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.347 ************************************ 00:28:54.347 START TEST nvmf_digest 00:28:54.347 
************************************ 00:28:54.347 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:54.347 * Looking for test storage... 00:28:54.347 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:28:54.347 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:54.347 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:28:54.347 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:54.606 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:54.606 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:54.606 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:54.606 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:54.606 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:28:54.606 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:28:54.606 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:28:54.606 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:28:54.606 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:28:54.606 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:28:54.606 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:28:54.606 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:54.606 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:28:54.606 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:28:54.606 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:54.606 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:54.606 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:28:54.606 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:28:54.606 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:54.606 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:28:54.606 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:28:54.606 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:28:54.606 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:28:54.606 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:54.606 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:28:54.606 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:28:54.606 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:54.606 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:54.606 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:28:54.606 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:54.606 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:54.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:54.606 --rc genhtml_branch_coverage=1 00:28:54.606 --rc genhtml_function_coverage=1 00:28:54.606 --rc genhtml_legend=1 00:28:54.606 --rc geninfo_all_blocks=1 00:28:54.606 --rc geninfo_unexecuted_blocks=1 00:28:54.606 00:28:54.606 ' 00:28:54.606 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:54.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:54.606 --rc genhtml_branch_coverage=1 00:28:54.606 --rc genhtml_function_coverage=1 00:28:54.606 --rc genhtml_legend=1 00:28:54.606 --rc geninfo_all_blocks=1 00:28:54.606 --rc geninfo_unexecuted_blocks=1 00:28:54.606 00:28:54.606 ' 00:28:54.606 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:54.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:54.606 --rc genhtml_branch_coverage=1 00:28:54.606 --rc genhtml_function_coverage=1 00:28:54.606 --rc genhtml_legend=1 00:28:54.606 --rc geninfo_all_blocks=1 00:28:54.606 --rc geninfo_unexecuted_blocks=1 00:28:54.606 00:28:54.606 ' 00:28:54.606 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:54.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:54.606 --rc genhtml_branch_coverage=1 00:28:54.606 --rc genhtml_function_coverage=1 00:28:54.606 --rc genhtml_legend=1 00:28:54.606 --rc geninfo_all_blocks=1 00:28:54.606 --rc geninfo_unexecuted_blocks=1 00:28:54.606 00:28:54.606 ' 00:28:54.606 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:54.606 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:54.606 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:54.606 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:54.606 12:27:50 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:54.606 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:54.606 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:54.606 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:54.606 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:54.606 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:54.606 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:54.606 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:54.606 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:28:54.606 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:28:54.606 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:54.606 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:54.606 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:54.606 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:54.606 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:54.606 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:28:54.606 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:54.606 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:54.606 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:54.607 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:28:54.607 Cannot find device "nvmf_init_br" 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:28:54.607 Cannot find device "nvmf_init_br2" 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:28:54.607 Cannot find device "nvmf_tgt_br" 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:28:54.607 Cannot find device "nvmf_tgt_br2" 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:28:54.607 Cannot find device "nvmf_init_br" 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:28:54.607 Cannot find device "nvmf_init_br2" 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:28:54.607 Cannot find device "nvmf_tgt_br" 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:28:54.607 Cannot find device "nvmf_tgt_br2" 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:28:54.607 Cannot find device "nvmf_br" 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:28:54.607 Cannot find device "nvmf_init_if" 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:28:54.607 Cannot find device "nvmf_init_if2" 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:54.607 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:54.607 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:54.607 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:54.866 12:27:50 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:28:54.866 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:28:54.866 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:28:54.866 00:28:54.866 --- 10.0.0.3 ping statistics --- 00:28:54.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:54.866 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:28:54.866 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:28:54.866 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.078 ms 00:28:54.866 00:28:54.866 --- 10.0.0.4 ping statistics --- 00:28:54.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:54.866 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:54.866 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:54.866 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:28:54.866 00:28:54.866 --- 10.0.0.1 ping statistics --- 00:28:54.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:54.866 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:28:54.866 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:54.866 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:28:54.866 00:28:54.866 --- 10.0.0.2 ping statistics --- 00:28:54.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:54.866 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:54.866 ************************************ 00:28:54.866 START TEST nvmf_digest_clean 00:28:54.866 ************************************ 00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
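The four pings above complete the topology that nvmf_veth_init traced out: two initiator interfaces on the host side (10.0.0.1/10.0.0.2), two target interfaces inside the nvmf_tgt_ns_spdk namespace (10.0.0.3/10.0.0.4), all joined through the nvmf_br bridge with iptables ACCEPT rules for port 4420. A condensed sketch of that setup with one veth pair per side (names and addresses as in the trace; the real script also creates the second pair on each side and the FORWARD rule):
    # Condensed sketch of nvmf_veth_init above; the run itself builds two
    # veth pairs per side, only one is shown here.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_tgt_br up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3    # host-side initiator reaches the namespaced target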
00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=93105 00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 93105 00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 93105 ']' 00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:54.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:54.866 12:27:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:54.866 [2024-11-25 12:27:50.709018] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:28:54.866 [2024-11-25 12:27:50.709068] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:55.124 [2024-11-25 12:27:50.851363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:55.124 [2024-11-25 12:27:50.891374] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:55.124 [2024-11-25 12:27:50.891414] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:55.124 [2024-11-25 12:27:50.891420] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:55.124 [2024-11-25 12:27:50.891425] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:55.124 [2024-11-25 12:27:50.891430] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:55.124 [2024-11-25 12:27:50.891700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:56.057 12:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:56.057 12:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:56.057 12:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:56.057 12:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:56.057 12:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:56.057 12:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:56.057 12:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:56.057 12:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:28:56.057 12:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:28:56.057 12:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.057 12:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:56.057 null0 00:28:56.057 [2024-11-25 12:27:51.776447] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:56.057 [2024-11-25 12:27:51.800517] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:56.057 12:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.057 12:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:28:56.057 12:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:56.057 12:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:56.057 12:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:56.057 12:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:56.057 12:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:56.057 12:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:56.057 12:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93155 00:28:56.057 12:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93155 /var/tmp/bperf.sock 00:28:56.057 12:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 93155 ']' 00:28:56.057 12:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:56.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
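common_target_config above feeds one JSON-RPC batch to the target started with --wait-for-rpc; split into individual rpc.py calls it is roughly the sketch below. The null bdev size/block size and the -a allow-any-host flag are assumptions; everything else (null0, the cnode1 subsystem NQN, the serial, the 10.0.0.3:4420 listener, the '-t tcp -o' transport options) appears in this run:
    # Rough, hedged equivalent of common_target_config's JSON-RPC batch.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc framework_start_init                      # target was launched --wait-for-rpc
    $rpc bdev_null_create null0 1000 512           # size and block size assumed
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420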
00:28:56.057 12:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:56.057 12:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:56.057 12:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:56.057 12:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:56.057 12:27:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:56.057 [2024-11-25 12:27:51.842839] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:28:56.057 [2024-11-25 12:27:51.842895] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93155 ] 00:28:56.315 [2024-11-25 12:27:51.979193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:56.315 [2024-11-25 12:27:52.021043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:56.890 12:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:56.890 12:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:28:56.890 12:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:56.891 12:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:56.891 12:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:57.170 12:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:57.170 12:27:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:57.427 nvme0n1 00:28:57.427 12:27:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:57.427 12:27:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:57.684 Running I/O for 2 seconds... 
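Each clean-digest run in this test follows the same shape, and the first (randread, 4096-byte I/O, queue depth 128) has just started above: bdevperf is launched with -z --wait-for-rpc, framework_start_init is issued over /var/tmp/bperf.sock, a controller is attached with --ddgst so the NVMe/TCP data digest (CRC32C) is enabled, and perform_tests drives I/O for two seconds. Replayed by hand, the per-run sequence is the three commands the xtrace shows:

    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/bperf.sock

    # bdevperf was started with --wait-for-rpc, so finish its framework init first.
    "$SPDK/scripts/rpc.py" -s "$SOCK" framework_start_init

    # Attach the target with data digest enabled (--ddgst), as host/digest.sh@89 does.
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Start the timed run; bdevperf prints its results JSON when the run finishes.
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests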
00:28:59.551 22632.00 IOPS, 88.41 MiB/s [2024-11-25T12:27:55.427Z] 23641.50 IOPS, 92.35 MiB/s 00:28:59.551 Latency(us) 00:28:59.551 [2024-11-25T12:27:55.427Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:59.551 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:59.551 nvme0n1 : 2.00 23657.74 92.41 0.00 0.00 5405.63 2886.10 10939.47 00:28:59.551 [2024-11-25T12:27:55.427Z] =================================================================================================================== 00:28:59.551 [2024-11-25T12:27:55.427Z] Total : 23657.74 92.41 0.00 0.00 5405.63 2886.10 10939.47 00:28:59.551 { 00:28:59.551 "results": [ 00:28:59.551 { 00:28:59.551 "job": "nvme0n1", 00:28:59.551 "core_mask": "0x2", 00:28:59.551 "workload": "randread", 00:28:59.551 "status": "finished", 00:28:59.551 "queue_depth": 128, 00:28:59.551 "io_size": 4096, 00:28:59.551 "runtime": 2.004038, 00:28:59.551 "iops": 23657.735032968438, 00:28:59.551 "mibps": 92.41302747253296, 00:28:59.551 "io_failed": 0, 00:28:59.551 "io_timeout": 0, 00:28:59.551 "avg_latency_us": 5405.631575405253, 00:28:59.551 "min_latency_us": 2886.1046153846155, 00:28:59.551 "max_latency_us": 10939.47076923077 00:28:59.551 } 00:28:59.551 ], 00:28:59.551 "core_count": 1 00:28:59.551 } 00:28:59.551 12:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:59.551 12:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:59.551 12:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:59.551 | select(.opcode=="crc32c") 00:28:59.551 | "\(.module_name) \(.executed)"' 00:28:59.551 12:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:59.551 12:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:59.809 12:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:59.809 12:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:59.809 12:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:59.809 12:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:59.809 12:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93155 00:28:59.809 12:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 93155 ']' 00:28:59.809 12:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 93155 00:28:59.809 12:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:28:59.809 12:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:59.809 12:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93155 00:28:59.809 12:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:59.809 killing process with pid 93155 00:28:59.809 12:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:59.809 12:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93155' 00:28:59.809 Received shutdown signal, test time was about 2.000000 seconds 00:28:59.809 00:28:59.809 Latency(us) 00:28:59.809 [2024-11-25T12:27:55.685Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:59.809 [2024-11-25T12:27:55.685Z] =================================================================================================================== 00:28:59.809 [2024-11-25T12:27:55.685Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:59.809 12:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 93155 00:28:59.809 12:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 93155 00:29:00.066 12:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:29:00.066 12:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:00.066 12:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:00.066 12:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:00.066 12:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:00.066 12:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:00.066 12:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:00.066 12:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93244 00:29:00.066 12:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:00.066 12:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93244 /var/tmp/bperf.sock 00:29:00.066 12:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 93244 ']' 00:29:00.066 12:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:00.066 12:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:00.066 12:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:00.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:00.066 12:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:00.066 12:27:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:00.066 [2024-11-25 12:27:55.786626] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:29:00.066 [2024-11-25 12:27:55.787051] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93244 ] 00:29:00.066 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:00.066 Zero copy mechanism will not be used. 00:29:00.066 [2024-11-25 12:27:55.919523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:00.323 [2024-11-25 12:27:55.955332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:00.888 12:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:00.888 12:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:00.888 12:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:00.888 12:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:00.888 12:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:01.146 12:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:01.146 12:27:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:01.403 nvme0n1 00:29:01.403 12:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:01.403 12:27:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:01.661 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:01.661 Zero copy mechanism will not be used. 00:29:01.661 Running I/O for 2 seconds... 
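In the JSON block each run prints (one appears above for the 4096-byte randread run, and the runs below produce their own), the mibps field is just iops scaled by the I/O size: mibps = iops * io_size / 2^20. For the run above, 23657.735 * 4096 / 1048576 = 92.413, matching the reported value to the digit. Saved to a file, a results blob can be sanity-checked with the same jq style the test itself uses; results.json here is a hypothetical capture of one such block.

    # Recompute MiB/s from IOPS and I/O size, compare with the reported mibps field.
    jq -r '.results[]
           | "\(.job): reported=\(.mibps) recomputed=\(.iops * .io_size / 1048576)"' results.json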
00:29:03.525 11046.00 IOPS, 1380.75 MiB/s [2024-11-25T12:27:59.401Z] 11050.50 IOPS, 1381.31 MiB/s 00:29:03.525 Latency(us) 00:29:03.525 [2024-11-25T12:27:59.401Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:03.525 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:03.525 nvme0n1 : 2.00 11046.74 1380.84 0.00 0.00 1445.96 428.50 2684.46 00:29:03.525 [2024-11-25T12:27:59.401Z] =================================================================================================================== 00:29:03.525 [2024-11-25T12:27:59.401Z] Total : 11046.74 1380.84 0.00 0.00 1445.96 428.50 2684.46 00:29:03.525 { 00:29:03.525 "results": [ 00:29:03.525 { 00:29:03.525 "job": "nvme0n1", 00:29:03.525 "core_mask": "0x2", 00:29:03.525 "workload": "randread", 00:29:03.525 "status": "finished", 00:29:03.525 "queue_depth": 16, 00:29:03.525 "io_size": 131072, 00:29:03.525 "runtime": 2.002219, 00:29:03.525 "iops": 11046.743637933712, 00:29:03.525 "mibps": 1380.842954741714, 00:29:03.525 "io_failed": 0, 00:29:03.525 "io_timeout": 0, 00:29:03.526 "avg_latency_us": 1445.961975696787, 00:29:03.526 "min_latency_us": 428.50461538461536, 00:29:03.526 "max_latency_us": 2684.4553846153844 00:29:03.526 } 00:29:03.526 ], 00:29:03.526 "core_count": 1 00:29:03.526 } 00:29:03.526 12:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:03.526 12:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:03.526 12:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:03.526 12:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:03.526 | select(.opcode=="crc32c") 00:29:03.526 | "\(.module_name) \(.executed)"' 00:29:03.526 12:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:03.785 12:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:03.785 12:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:03.785 12:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:03.785 12:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:03.785 12:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93244 00:29:03.785 12:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 93244 ']' 00:29:03.785 12:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 93244 00:29:03.785 12:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:03.785 12:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:03.785 12:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93244 00:29:03.785 12:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:03.785 12:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
00:29:03.785 killing process with pid 93244 00:29:03.785 12:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93244' 00:29:03.785 12:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 93244 00:29:03.785 Received shutdown signal, test time was about 2.000000 seconds 00:29:03.785 00:29:03.785 Latency(us) 00:29:03.785 [2024-11-25T12:27:59.661Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:03.785 [2024-11-25T12:27:59.661Z] =================================================================================================================== 00:29:03.785 [2024-11-25T12:27:59.661Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:03.785 12:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 93244 00:29:04.043 12:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:29:04.043 12:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:04.043 12:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:04.043 12:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:04.043 12:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:04.043 12:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:04.043 12:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:04.043 12:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:04.043 12:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93331 00:29:04.043 12:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93331 /var/tmp/bperf.sock 00:29:04.043 12:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 93331 ']' 00:29:04.043 12:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:04.043 12:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:04.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:04.043 12:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:04.043 12:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:04.044 12:27:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:04.044 [2024-11-25 12:27:59.707851] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:29:04.044 [2024-11-25 12:27:59.708322] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93331 ] 00:29:04.044 [2024-11-25 12:27:59.837410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:04.044 [2024-11-25 12:27:59.873036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:04.976 12:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:04.976 12:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:04.977 12:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:04.977 12:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:04.977 12:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:04.977 12:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:04.977 12:28:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:05.235 nvme0n1 00:29:05.492 12:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:05.492 12:28:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:05.492 Running I/O for 2 seconds... 
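After every run, the read -r acc_module acc_executed / get_accel_stats pair in the xtrace above queries bdevperf's accel framework and asserts two things: crc32c was executed at least once ((( acc_executed > 0 ))), and it ran in the expected module, which is software whenever scan_dsa=false (hence the [[ software == software ]] comparisons). The whole check reduces to one rpc.py + jq pipeline, using the exact filter from host/digest.sh@37:

    # Which module executed crc32c in the bdevperf app, and how many times?
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    # Expected output for these runs: software <nonzero count>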
00:29:07.355 30135.00 IOPS, 117.71 MiB/s [2024-11-25T12:28:03.231Z] 30263.50 IOPS, 118.22 MiB/s 00:29:07.355 Latency(us) 00:29:07.355 [2024-11-25T12:28:03.231Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:07.355 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:07.355 nvme0n1 : 2.00 30261.30 118.21 0.00 0.00 4224.27 1953.48 6604.01 00:29:07.355 [2024-11-25T12:28:03.231Z] =================================================================================================================== 00:29:07.355 [2024-11-25T12:28:03.231Z] Total : 30261.30 118.21 0.00 0.00 4224.27 1953.48 6604.01 00:29:07.355 { 00:29:07.355 "results": [ 00:29:07.355 { 00:29:07.355 "job": "nvme0n1", 00:29:07.355 "core_mask": "0x2", 00:29:07.355 "workload": "randwrite", 00:29:07.355 "status": "finished", 00:29:07.355 "queue_depth": 128, 00:29:07.355 "io_size": 4096, 00:29:07.355 "runtime": 2.004375, 00:29:07.355 "iops": 30261.303398815093, 00:29:07.355 "mibps": 118.20821640162146, 00:29:07.355 "io_failed": 0, 00:29:07.355 "io_timeout": 0, 00:29:07.355 "avg_latency_us": 4224.267656950089, 00:29:07.355 "min_latency_us": 1953.4769230769232, 00:29:07.355 "max_latency_us": 6604.012307692307 00:29:07.355 } 00:29:07.355 ], 00:29:07.355 "core_count": 1 00:29:07.355 } 00:29:07.355 12:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:07.355 12:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:07.613 12:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:07.613 12:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:07.613 | select(.opcode=="crc32c") 00:29:07.613 | "\(.module_name) \(.executed)"' 00:29:07.613 12:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:07.613 12:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:07.613 12:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:07.613 12:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:07.613 12:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:07.613 12:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93331 00:29:07.613 12:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 93331 ']' 00:29:07.613 12:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 93331 00:29:07.613 12:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:07.613 12:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:07.613 12:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93331 00:29:07.613 12:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:07.613 12:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 
00:29:07.613 killing process with pid 93331 00:29:07.613 12:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93331' 00:29:07.613 12:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 93331 00:29:07.613 Received shutdown signal, test time was about 2.000000 seconds 00:29:07.613 00:29:07.613 Latency(us) 00:29:07.613 [2024-11-25T12:28:03.489Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:07.613 [2024-11-25T12:28:03.489Z] =================================================================================================================== 00:29:07.613 [2024-11-25T12:28:03.489Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:07.613 12:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 93331 00:29:07.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:07.870 12:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:29:07.870 12:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:07.870 12:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:07.870 12:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:07.870 12:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:07.870 12:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:07.870 12:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:07.870 12:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=93415 00:29:07.870 12:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 93415 /var/tmp/bperf.sock 00:29:07.870 12:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 93415 ']' 00:29:07.870 12:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:07.870 12:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:07.870 12:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:07.870 12:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:07.870 12:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:07.870 12:28:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:07.870 [2024-11-25 12:28:03.627080] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:29:07.870 [2024-11-25 12:28:03.627489] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93415 ] 00:29:07.870 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:07.870 Zero copy mechanism will not be used. 00:29:08.127 [2024-11-25 12:28:03.759666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:08.127 [2024-11-25 12:28:03.797010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:08.735 12:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:08.735 12:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:08.735 12:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:08.735 12:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:08.735 12:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:08.993 12:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:08.993 12:28:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:09.251 nvme0n1 00:29:09.251 12:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:09.251 12:28:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:09.509 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:09.509 Zero copy mechanism will not be used. 00:29:09.509 Running I/O for 2 seconds... 
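Two details in the 131072-byte runs are worth decoding. The "greater than zero copy threshold (65536)" notices mean the 128 KiB payloads exceed the socket layer's 64 KiB zero-copy cutoff, so buffers are copied instead (the notice never appears for the 4096-byte runs). And the throughput figures are internally consistent with Little's law, IOPS ≈ queue_depth / avg_latency: for the results just below, 16 / 1370.44 us ≈ 11.7K against a reported 11651 IOPS, and the earlier runs line up the same way. The check against a captured results JSON (the hypothetical results.json again):

    # Cross-check reported IOPS against queue_depth / average latency (Little's law).
    jq -r '.results[]
           | "\(.job): iops=\(.iops) qd_over_latency=\(.queue_depth * 1000000 / .avg_latency_us)"' results.json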
00:29:11.392 11616.00 IOPS, 1452.00 MiB/s [2024-11-25T12:28:07.268Z] 11656.00 IOPS, 1457.00 MiB/s 00:29:11.392 Latency(us) 00:29:11.392 [2024-11-25T12:28:07.268Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:11.392 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:11.392 nvme0n1 : 2.00 11651.25 1456.41 0.00 0.00 1370.44 1254.01 6654.42 00:29:11.392 [2024-11-25T12:28:07.268Z] =================================================================================================================== 00:29:11.392 [2024-11-25T12:28:07.268Z] Total : 11651.25 1456.41 0.00 0.00 1370.44 1254.01 6654.42 00:29:11.392 { 00:29:11.392 "results": [ 00:29:11.392 { 00:29:11.392 "job": "nvme0n1", 00:29:11.392 "core_mask": "0x2", 00:29:11.392 "workload": "randwrite", 00:29:11.392 "status": "finished", 00:29:11.392 "queue_depth": 16, 00:29:11.392 "io_size": 131072, 00:29:11.392 "runtime": 2.002188, 00:29:11.392 "iops": 11651.253528639669, 00:29:11.392 "mibps": 1456.4066910799586, 00:29:11.392 "io_failed": 0, 00:29:11.392 "io_timeout": 0, 00:29:11.392 "avg_latency_us": 1370.4368555450035, 00:29:11.392 "min_latency_us": 1254.006153846154, 00:29:11.392 "max_latency_us": 6654.424615384615 00:29:11.392 } 00:29:11.392 ], 00:29:11.392 "core_count": 1 00:29:11.392 } 00:29:11.392 12:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:11.392 12:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:11.392 12:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:11.392 | select(.opcode=="crc32c") 00:29:11.392 | "\(.module_name) \(.executed)"' 00:29:11.392 12:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:11.392 12:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:11.650 12:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:11.650 12:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:11.650 12:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:11.650 12:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:11.650 12:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 93415 00:29:11.650 12:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 93415 ']' 00:29:11.650 12:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 93415 00:29:11.650 12:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:11.650 12:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:11.650 12:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93415 00:29:11.650 killing process with pid 93415 00:29:11.650 Received shutdown signal, test time was about 2.000000 seconds 00:29:11.650 00:29:11.650 Latency(us) 00:29:11.650 [2024-11-25T12:28:07.526Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average 
min max 00:29:11.650 [2024-11-25T12:28:07.526Z] =================================================================================================================== 00:29:11.650 [2024-11-25T12:28:07.526Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:11.650 12:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:11.650 12:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:11.650 12:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93415' 00:29:11.650 12:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 93415 00:29:11.650 12:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 93415 00:29:11.909 12:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 93105 00:29:11.909 12:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 93105 ']' 00:29:11.909 12:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 93105 00:29:11.909 12:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:11.909 12:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:11.909 12:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93105 00:29:11.909 killing process with pid 93105 00:29:11.909 12:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:11.909 12:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:11.909 12:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93105' 00:29:11.909 12:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 93105 00:29:11.909 12:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 93105 00:29:11.909 ************************************ 00:29:11.909 END TEST nvmf_digest_clean 00:29:11.909 ************************************ 00:29:11.909 00:29:11.909 real 0m17.057s 00:29:11.909 user 0m33.130s 00:29:11.909 sys 0m3.456s 00:29:11.909 12:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:11.909 12:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:11.909 12:28:07 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:29:11.909 12:28:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:11.909 12:28:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:11.909 12:28:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:11.909 ************************************ 00:29:11.909 START TEST nvmf_digest_error 00:29:11.909 ************************************ 00:29:11.909 12:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:29:11.909 12:28:07 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:29:11.909 12:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:11.909 12:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:11.909 12:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:11.909 12:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=93524 00:29:11.909 12:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 93524 00:29:11.909 12:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 93524 ']' 00:29:11.909 12:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:11.909 12:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:11.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:11.909 12:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:11.909 12:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:11.909 12:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:11.909 12:28:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:12.167 [2024-11-25 12:28:07.810270] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:29:12.167 [2024-11-25 12:28:07.810327] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:12.167 [2024-11-25 12:28:07.944092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:12.167 [2024-11-25 12:28:07.986283] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:12.167 [2024-11-25 12:28:07.986320] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:12.167 [2024-11-25 12:28:07.986327] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:12.167 [2024-11-25 12:28:07.986332] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:12.167 [2024-11-25 12:28:07.986337] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
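The nvmf_digest_error test starting here inverts the clean test: instead of verifying that software CRC32C ran cleanly, it routes the crc32c opcode through the accel "error" module (the rpc_cmd accel_assign_opc -o crc32c -m error at host/digest.sh@104, acknowledged by the accel_rpc.c notice below) and then uses accel_error_inject_error to corrupt digests on demand. Each corrupted digest surfaces on the initiator as a paired "data digest error on tqpair" / COMMAND TRANSIENT TRANSPORT ERROR completion, which is what fills the rest of this section. Roughly, with rpc.py pointed at the target app's RPC socket:

    # Route crc32c through the error-injection accel module (once, at target config).
    rpc.py accel_assign_opc -o crc32c -m error

    # Per run: start with injection disabled (host/digest.sh@63), attach with --ddgst,
    # then corrupt 256 crc32c operations (host/digest.sh@67) before perform_tests.
    rpc.py accel_error_inject_error -o crc32c -t disable
    rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256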
00:29:12.167 [2024-11-25 12:28:07.986618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:13.099 12:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:13.099 12:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:13.099 12:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:13.099 12:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:13.099 12:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:13.100 12:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:13.100 12:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:29:13.100 12:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.100 12:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:13.100 [2024-11-25 12:28:08.730982] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:29:13.100 12:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.100 12:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:29:13.100 12:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:29:13.100 12:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.100 12:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:13.100 null0 00:29:13.100 [2024-11-25 12:28:08.823156] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:13.100 [2024-11-25 12:28:08.847253] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:29:13.100 12:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.100 12:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:29:13.100 12:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:13.100 12:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:29:13.100 12:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:13.100 12:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:13.100 12:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93574 00:29:13.100 12:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93574 /var/tmp/bperf.sock 00:29:13.100 12:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 93574 ']' 00:29:13.100 12:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:13.100 12:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:29:13.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:13.100 12:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:13.100 12:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:13.100 12:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:13.100 12:28:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:29:13.100 [2024-11-25 12:28:08.890953] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:29:13.100 [2024-11-25 12:28:08.891008] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93574 ] 00:29:13.357 [2024-11-25 12:28:09.029528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:13.357 [2024-11-25 12:28:09.071621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:13.921 12:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:13.921 12:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:13.921 12:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:13.921 12:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:14.179 12:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:14.179 12:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.179 12:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:14.179 12:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.179 12:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:14.179 12:28:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:14.437 nvme0n1 00:29:14.437 12:28:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:14.437 12:28:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.437 12:28:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:14.437 12:28:10 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.437 12:28:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:14.437 12:28:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:14.695 Running I/O for 2 seconds... 00:29:14.695 [2024-11-25 12:28:10.389297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680) 00:29:14.695 [2024-11-25 12:28:10.389343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.695 [2024-11-25 12:28:10.389353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.695 [2024-11-25 12:28:10.400679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680) 00:29:14.695 [2024-11-25 12:28:10.400711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:14673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.695 [2024-11-25 12:28:10.400720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.695 [2024-11-25 12:28:10.411928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680) 00:29:14.695 [2024-11-25 12:28:10.411956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:21847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.695 [2024-11-25 12:28:10.411964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.695 [2024-11-25 12:28:10.423136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680) 00:29:14.695 [2024-11-25 12:28:10.423165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:8840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.695 [2024-11-25 12:28:10.423173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.695 [2024-11-25 12:28:10.434274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680) 00:29:14.695 [2024-11-25 12:28:10.434301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:12473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.695 [2024-11-25 12:28:10.434309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.695 [2024-11-25 12:28:10.445496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680) 00:29:14.695 [2024-11-25 12:28:10.445525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.695 [2024-11-25 12:28:10.445532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.695 
[2024-11-25 12:28:10.456830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:14.695 [2024-11-25 12:28:10.456858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:4917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.695 [2024-11-25 12:28:10.456866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:14.695 [2024-11-25 12:28:10.468140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:14.695 [2024-11-25 12:28:10.468169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:19092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.695 [2024-11-25 12:28:10.468176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:14.695 [2024-11-25 12:28:10.479641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:14.695 [2024-11-25 12:28:10.479670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.695 [2024-11-25 12:28:10.479677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:14.695 [2024-11-25 12:28:10.491062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:14.695 [2024-11-25 12:28:10.491090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:1126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.695 [2024-11-25 12:28:10.491097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:14.695 [2024-11-25 12:28:10.502463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:14.695 [2024-11-25 12:28:10.502494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:12663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.695 [2024-11-25 12:28:10.502503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:14.695 [2024-11-25 12:28:10.515266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:14.695 [2024-11-25 12:28:10.515295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:20809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.695 [2024-11-25 12:28:10.515303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:14.695 [2024-11-25 12:28:10.525925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:14.695 [2024-11-25 12:28:10.525973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:2238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.695 [2024-11-25 12:28:10.525986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:14.695 [2024-11-25 12:28:10.538089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:14.695 [2024-11-25 12:28:10.538118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.695 [2024-11-25 12:28:10.538125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:14.695 [2024-11-25 12:28:10.549262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:14.695 [2024-11-25 12:28:10.549291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:2697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.695 [2024-11-25 12:28:10.549298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:14.695 [2024-11-25 12:28:10.561072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:14.695 [2024-11-25 12:28:10.561099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:19170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.695 [2024-11-25 12:28:10.561106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:14.954 [2024-11-25 12:28:10.571903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:14.954 [2024-11-25 12:28:10.571931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:10810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.954 [2024-11-25 12:28:10.571938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:14.954 [2024-11-25 12:28:10.583746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:14.954 [2024-11-25 12:28:10.583786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:19592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.954 [2024-11-25 12:28:10.583793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:14.954 [2024-11-25 12:28:10.594972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:14.954 [2024-11-25 12:28:10.594999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:22603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.954 [2024-11-25 12:28:10.595006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:14.954 [2024-11-25 12:28:10.606588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:14.954 [2024-11-25 12:28:10.606617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:1668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.954 [2024-11-25 12:28:10.606625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:14.954 [2024-11-25 12:28:10.617888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:14.954 [2024-11-25 12:28:10.617916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.954 [2024-11-25 12:28:10.617924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:14.954 [2024-11-25 12:28:10.629174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:14.954 [2024-11-25 12:28:10.629201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.954 [2024-11-25 12:28:10.629209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:14.954 [2024-11-25 12:28:10.640908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:14.954 [2024-11-25 12:28:10.640935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.954 [2024-11-25 12:28:10.640943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:14.954 [2024-11-25 12:28:10.651931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:14.954 [2024-11-25 12:28:10.651957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.954 [2024-11-25 12:28:10.651965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:14.954 [2024-11-25 12:28:10.663393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:14.954 [2024-11-25 12:28:10.663420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:10061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.954 [2024-11-25 12:28:10.663427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:14.954 [2024-11-25 12:28:10.674494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:14.954 [2024-11-25 12:28:10.674521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.954 [2024-11-25 12:28:10.674529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:14.954 [2024-11-25 12:28:10.685809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:14.954 [2024-11-25 12:28:10.685836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.954 [2024-11-25 12:28:10.685843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:14.954 [2024-11-25 12:28:10.696959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:14.954 [2024-11-25 12:28:10.696986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.954 [2024-11-25 12:28:10.696993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:14.954 [2024-11-25 12:28:10.708147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:14.954 [2024-11-25 12:28:10.708174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:14815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.954 [2024-11-25 12:28:10.708181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:14.954 [2024-11-25 12:28:10.719343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:14.954 [2024-11-25 12:28:10.719370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:13283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.954 [2024-11-25 12:28:10.719378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:14.954 [2024-11-25 12:28:10.730618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:14.954 [2024-11-25 12:28:10.730645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:11743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.954 [2024-11-25 12:28:10.730652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:14.954 [2024-11-25 12:28:10.742033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:14.954 [2024-11-25 12:28:10.742060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:13155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.954 [2024-11-25 12:28:10.742067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:14.954 [2024-11-25 12:28:10.752910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:14.954 [2024-11-25 12:28:10.752937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:6618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.954 [2024-11-25 12:28:10.752944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:14.954 [2024-11-25 12:28:10.763701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:14.954 [2024-11-25 12:28:10.763729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:13347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.954 [2024-11-25 12:28:10.763736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:14.954 [2024-11-25 12:28:10.775657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:14.954 [2024-11-25 12:28:10.775684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:17347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.954 [2024-11-25 12:28:10.775691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:14.954 [2024-11-25 12:28:10.786739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:14.954 [2024-11-25 12:28:10.786774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.954 [2024-11-25 12:28:10.786781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:14.954 [2024-11-25 12:28:10.797541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:14.954 [2024-11-25 12:28:10.797570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:10962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.954 [2024-11-25 12:28:10.797577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:14.954 [2024-11-25 12:28:10.808646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:14.954 [2024-11-25 12:28:10.808673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:12899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.954 [2024-11-25 12:28:10.808680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:14.955 [2024-11-25 12:28:10.820253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:14.955 [2024-11-25 12:28:10.820281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:4408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:14.955 [2024-11-25 12:28:10.820288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.212 [2024-11-25 12:28:10.831457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.212 [2024-11-25 12:28:10.831484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:14255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.212 [2024-11-25 12:28:10.831492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.212 [2024-11-25 12:28:10.842902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.212 [2024-11-25 12:28:10.842928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:20685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.212 [2024-11-25 12:28:10.842935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.212 [2024-11-25 12:28:10.854211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.212 [2024-11-25 12:28:10.854238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.212 [2024-11-25 12:28:10.854246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.212 [2024-11-25 12:28:10.865221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.212 [2024-11-25 12:28:10.865252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:2750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.213 [2024-11-25 12:28:10.865259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.213 [2024-11-25 12:28:10.876667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.213 [2024-11-25 12:28:10.876695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:14976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.213 [2024-11-25 12:28:10.876703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.213 [2024-11-25 12:28:10.888262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.213 [2024-11-25 12:28:10.888291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.213 [2024-11-25 12:28:10.888298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.213 [2024-11-25 12:28:10.899650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.213 [2024-11-25 12:28:10.899679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:5402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.213 [2024-11-25 12:28:10.899686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.213 [2024-11-25 12:28:10.912682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.213 [2024-11-25 12:28:10.912712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.213 [2024-11-25 12:28:10.912719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.213 [2024-11-25 12:28:10.923776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.213 [2024-11-25 12:28:10.923805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.213 [2024-11-25 12:28:10.923813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.213 [2024-11-25 12:28:10.935130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.213 [2024-11-25 12:28:10.935158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.213 [2024-11-25 12:28:10.935166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.213 [2024-11-25 12:28:10.946654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.213 [2024-11-25 12:28:10.946684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:19076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.213 [2024-11-25 12:28:10.946691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.213 [2024-11-25 12:28:10.957781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.213 [2024-11-25 12:28:10.957809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:11549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.213 [2024-11-25 12:28:10.957816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.213 [2024-11-25 12:28:10.968968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.213 [2024-11-25 12:28:10.968994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.213 [2024-11-25 12:28:10.969002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.213 [2024-11-25 12:28:10.980286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.213 [2024-11-25 12:28:10.980314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.213 [2024-11-25 12:28:10.980322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.213 [2024-11-25 12:28:10.991466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.213 [2024-11-25 12:28:10.991492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.213 [2024-11-25 12:28:10.991500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.213 [2024-11-25 12:28:11.002925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.213 [2024-11-25 12:28:11.002952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.213 [2024-11-25 12:28:11.002959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.213 [2024-11-25 12:28:11.013917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.213 [2024-11-25 12:28:11.013943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.213 [2024-11-25 12:28:11.013951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.213 [2024-11-25 12:28:11.025291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.213 [2024-11-25 12:28:11.025318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:18068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.213 [2024-11-25 12:28:11.025325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.213 [2024-11-25 12:28:11.036490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.213 [2024-11-25 12:28:11.036517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.213 [2024-11-25 12:28:11.036525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.213 [2024-11-25 12:28:11.047988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.213 [2024-11-25 12:28:11.048015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:9103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.213 [2024-11-25 12:28:11.048023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.213 [2024-11-25 12:28:11.058990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.213 [2024-11-25 12:28:11.059017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:17245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.213 [2024-11-25 12:28:11.059024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.213 [2024-11-25 12:28:11.070174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.213 [2024-11-25 12:28:11.070201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:12608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.213 [2024-11-25 12:28:11.070208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.213 [2024-11-25 12:28:11.081473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.471 [2024-11-25 12:28:11.081499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:12638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.471 [2024-11-25 12:28:11.081507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.471 [2024-11-25 12:28:11.093025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.471 [2024-11-25 12:28:11.093052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:12557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.471 [2024-11-25 12:28:11.093060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.471 [2024-11-25 12:28:11.104608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.471 [2024-11-25 12:28:11.104635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.471 [2024-11-25 12:28:11.104643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.471 [2024-11-25 12:28:11.115910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.471 [2024-11-25 12:28:11.115937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:5696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.471 [2024-11-25 12:28:11.115944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.471 [2024-11-25 12:28:11.127009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.471 [2024-11-25 12:28:11.127035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:25099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.471 [2024-11-25 12:28:11.127043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.471 [2024-11-25 12:28:11.137768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.471 [2024-11-25 12:28:11.137794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:25020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.471 [2024-11-25 12:28:11.137801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.471 [2024-11-25 12:28:11.148873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.471 [2024-11-25 12:28:11.148900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:23142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.471 [2024-11-25 12:28:11.148907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.471 [2024-11-25 12:28:11.160284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.471 [2024-11-25 12:28:11.160312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:4267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.471 [2024-11-25 12:28:11.160319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.471 [2024-11-25 12:28:11.171074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.471 [2024-11-25 12:28:11.171100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.471 [2024-11-25 12:28:11.171107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.471 [2024-11-25 12:28:11.183227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.471 [2024-11-25 12:28:11.183256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.471 [2024-11-25 12:28:11.183263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.471 [2024-11-25 12:28:11.194352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.471 [2024-11-25 12:28:11.194379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:7409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.471 [2024-11-25 12:28:11.194386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.471 [2024-11-25 12:28:11.205625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.471 [2024-11-25 12:28:11.205653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:11268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.471 [2024-11-25 12:28:11.205660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.471 [2024-11-25 12:28:11.216689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.471 [2024-11-25 12:28:11.216716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:2523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.471 [2024-11-25 12:28:11.216723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.471 [2024-11-25 12:28:11.228353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.471 [2024-11-25 12:28:11.228380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.471 [2024-11-25 12:28:11.228387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.471 [2024-11-25 12:28:11.239801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.471 [2024-11-25 12:28:11.239827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:23101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.471 [2024-11-25 12:28:11.239834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.471 [2024-11-25 12:28:11.251348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.471 [2024-11-25 12:28:11.251375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:2625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.471 [2024-11-25 12:28:11.251382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.471 [2024-11-25 12:28:11.264400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.471 [2024-11-25 12:28:11.264427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:1809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.471 [2024-11-25 12:28:11.264434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.471 [2024-11-25 12:28:11.275926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.471 [2024-11-25 12:28:11.275954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:8102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.471 [2024-11-25 12:28:11.275961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.471 [2024-11-25 12:28:11.287317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.471 [2024-11-25 12:28:11.287344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:19842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.471 [2024-11-25 12:28:11.287351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.471 [2024-11-25 12:28:11.298895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.471 [2024-11-25 12:28:11.298923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:6600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.471 [2024-11-25 12:28:11.298930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.471 [2024-11-25 12:28:11.310171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.471 [2024-11-25 12:28:11.310196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.471 [2024-11-25 12:28:11.310204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.471 [2024-11-25 12:28:11.321201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.471 [2024-11-25 12:28:11.321229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:5888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.471 [2024-11-25 12:28:11.321236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.471 [2024-11-25 12:28:11.332832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.471 [2024-11-25 12:28:11.332859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:24811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.471 [2024-11-25 12:28:11.332866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.471 [2024-11-25 12:28:11.344320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.472 [2024-11-25 12:28:11.344347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:4975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.472 [2024-11-25 12:28:11.344354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.729 [2024-11-25 12:28:11.355621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.729 [2024-11-25 12:28:11.355648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:25553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.729 [2024-11-25 12:28:11.355655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.729 [2024-11-25 12:28:11.367148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.729 [2024-11-25 12:28:11.367176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:23214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.729 [2024-11-25 12:28:11.367184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.729 22200.00 IOPS, 86.72 MiB/s [2024-11-25T12:28:11.380Z] [2024-11-25 12:28:11.380180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.729 [2024-11-25 12:28:11.380208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:15249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.729 [2024-11-25 12:28:11.380215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.729 [2024-11-25 12:28:11.391308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.729 [2024-11-25 12:28:11.391336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:19579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.729 [2024-11-25 12:28:11.391343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.729 [2024-11-25 12:28:11.402612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.729 [2024-11-25 12:28:11.402638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:12046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.729 [2024-11-25 12:28:11.402646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.729 [2024-11-25 12:28:11.413901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.729 [2024-11-25 12:28:11.413926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:17516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.729 [2024-11-25 12:28:11.413933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.729 [2024-11-25 12:28:11.425090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.729 [2024-11-25 12:28:11.425117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:25469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.729 [2024-11-25 12:28:11.425125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.729 [2024-11-25 12:28:11.436928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.729 [2024-11-25 12:28:11.436954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:18592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.729 [2024-11-25 12:28:11.436961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.729 [2024-11-25 12:28:11.448409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.729 [2024-11-25 12:28:11.448436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:10385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.729 [2024-11-25 12:28:11.448443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.729 [2024-11-25 12:28:11.459862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.729 [2024-11-25 12:28:11.459890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.729 [2024-11-25 12:28:11.459897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.729 [2024-11-25 12:28:11.471473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.729 [2024-11-25 12:28:11.471500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:20394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.729 [2024-11-25 12:28:11.471508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.729 [2024-11-25 12:28:11.482765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.729 [2024-11-25 12:28:11.482791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:8198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.729 [2024-11-25 12:28:11.482798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.729 [2024-11-25 12:28:11.493575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.729 [2024-11-25 12:28:11.493603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:18955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.729 [2024-11-25 12:28:11.493610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.729 [2024-11-25 12:28:11.505064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.729 [2024-11-25 12:28:11.505090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:20038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.729 [2024-11-25 12:28:11.505097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.729 [2024-11-25 12:28:11.516780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.730 [2024-11-25 12:28:11.516806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:19968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.730 [2024-11-25 12:28:11.516813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.730 [2024-11-25 12:28:11.528320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.730 [2024-11-25 12:28:11.528347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.730 [2024-11-25 12:28:11.528355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.730 [2024-11-25 12:28:11.539582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.730 [2024-11-25 12:28:11.539610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.730 [2024-11-25 12:28:11.539618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.730 [2024-11-25 12:28:11.551034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.730 [2024-11-25 12:28:11.551062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:17676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.730 [2024-11-25 12:28:11.551070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.730 [2024-11-25 12:28:11.562326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.730 [2024-11-25 12:28:11.562353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:22874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.730 [2024-11-25 12:28:11.562360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.730 [2024-11-25 12:28:11.573720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.730 [2024-11-25 12:28:11.573759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.730 [2024-11-25 12:28:11.573767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.730 [2024-11-25 12:28:11.584805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.730 [2024-11-25 12:28:11.584833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:9048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.730 [2024-11-25 12:28:11.584840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.730 [2024-11-25 12:28:11.595922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.730 [2024-11-25 12:28:11.595950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:25401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.730 [2024-11-25 12:28:11.595957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.988 [2024-11-25 12:28:11.607176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.988 [2024-11-25 12:28:11.607203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:24448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.988 [2024-11-25 12:28:11.607210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.988 [2024-11-25 12:28:11.618728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.988 [2024-11-25 12:28:11.618765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:19392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.988 [2024-11-25 12:28:11.618773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.988 [2024-11-25 12:28:11.632246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.988 [2024-11-25 12:28:11.632277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:23615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.988 [2024-11-25 12:28:11.632286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.988 [2024-11-25 12:28:11.643253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.988 [2024-11-25 12:28:11.643281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:15804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.988 [2024-11-25 12:28:11.643288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.988 [2024-11-25 12:28:11.654459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.988 [2024-11-25 12:28:11.654488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.988 [2024-11-25 12:28:11.654496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.988 [2024-11-25 12:28:11.665562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.988 [2024-11-25 12:28:11.665591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:7262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.988 [2024-11-25 12:28:11.665598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.988 [2024-11-25 12:28:11.676759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.988 [2024-11-25 12:28:11.676787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:13123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.988 [2024-11-25 12:28:11.676794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.988 [2024-11-25 12:28:11.688273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.988 [2024-11-25 12:28:11.688301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:2288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.988 [2024-11-25 12:28:11.688309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.988 [2024-11-25 12:28:11.699784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.988 [2024-11-25 12:28:11.699812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:14857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.988 [2024-11-25 12:28:11.699819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.988 [2024-11-25 12:28:11.710937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.988 [2024-11-25 12:28:11.710965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:24966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.988 [2024-11-25 12:28:11.710972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.988 [2024-11-25 12:28:11.721845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.989 [2024-11-25 12:28:11.721873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:5061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.989 [2024-11-25 12:28:11.721880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.989 [2024-11-25 12:28:11.732844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.989 [2024-11-25 12:28:11.732872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.989 [2024-11-25 12:28:11.732879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.989 [2024-11-25 12:28:11.743885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.989 [2024-11-25 12:28:11.743913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:19117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.989 [2024-11-25 12:28:11.743920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.989 [2024-11-25 12:28:11.754964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.989 [2024-11-25 12:28:11.754992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.989 [2024-11-25 12:28:11.755000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.989 [2024-11-25 12:28:11.766285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.989 [2024-11-25 12:28:11.766312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:7447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.989 [2024-11-25 12:28:11.766319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.989 [2024-11-25 12:28:11.777481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.989 [2024-11-25 12:28:11.777509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.989 [2024-11-25 12:28:11.777517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.989 [2024-11-25 12:28:11.789225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.989 [2024-11-25 12:28:11.789254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:11941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.989 [2024-11-25 12:28:11.789262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.989 [2024-11-25 12:28:11.800717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.989 [2024-11-25 12:28:11.800744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:15103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.989 [2024-11-25 12:28:11.800760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.989 [2024-11-25 12:28:11.812121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.989 [2024-11-25 12:28:11.812148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:7290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.989 [2024-11-25 12:28:11.812156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.989 [2024-11-25 12:28:11.822885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.989 [2024-11-25 12:28:11.822909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:14327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.989 [2024-11-25 12:28:11.822914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.989 [2024-11-25 12:28:11.832527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.989 [2024-11-25 12:28:11.832553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.989 [2024-11-25 12:28:11.832558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.989 [2024-11-25 12:28:11.842358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.989 [2024-11-25 12:28:11.842382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:11819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.989 [2024-11-25 12:28:11.842388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.989 [2024-11-25 12:28:11.852132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.989 [2024-11-25 12:28:11.852157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.989 [2024-11-25 12:28:11.852162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:15.989 [2024-11-25 12:28:11.862274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:15.989 [2024-11-25 12:28:11.862297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:13407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:15.989 [2024-11-25 12:28:11.862303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:16.247 [2024-11-25 12:28:11.871247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:16.247 [2024-11-25 12:28:11.871273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:7522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.247 [2024-11-25 12:28:11.871279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:16.247 [2024-11-25 12:28:11.881362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:16.247 [2024-11-25 12:28:11.881385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:5017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.247 [2024-11-25 12:28:11.881390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:16.247 [2024-11-25 12:28:11.891415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:16.247 [2024-11-25 12:28:11.891439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.247 [2024-11-25 12:28:11.891445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:16.247 [2024-11-25 12:28:11.901245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:16.247 [2024-11-25 12:28:11.901269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:18652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.247 [2024-11-25 12:28:11.901274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:16.247 [2024-11-25 12:28:11.911191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:16.247 [2024-11-25 12:28:11.911215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:3050 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.248 [2024-11-25 12:28:11.911220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:16.248 [2024-11-25 12:28:11.921385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:16.248 [2024-11-25 12:28:11.921409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:6605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.248 [2024-11-25 12:28:11.921414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:16.248 [2024-11-25 12:28:11.931668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:16.248 [2024-11-25 12:28:11.931692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:6827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.248 [2024-11-25 12:28:11.931698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:16.248 [2024-11-25 12:28:11.941371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:16.248 [2024-11-25 12:28:11.941396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:15469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.248 [2024-11-25 12:28:11.941402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:16.248 [2024-11-25 12:28:11.950803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:16.248 [2024-11-25 12:28:11.950827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:22859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.248 [2024-11-25 12:28:11.950833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:16.248 [2024-11-25 12:28:11.960303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:16.248 [2024-11-25 12:28:11.960329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:9663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.248 [2024-11-25 12:28:11.960335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:16.248 [2024-11-25 12:28:11.971978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:16.248 [2024-11-25 12:28:11.972002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:19363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.248 [2024-11-25 12:28:11.972008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:16.248 [2024-11-25 12:28:11.981323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:16.248 [2024-11-25 12:28:11.981351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:23423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.248 [2024-11-25 12:28:11.981357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:16.248 [2024-11-25 12:28:11.990783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:16.248 [2024-11-25 12:28:11.990807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:9151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.248 [2024-11-25 12:28:11.990813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:16.248 [2024-11-25 12:28:12.000644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:16.248 [2024-11-25 12:28:12.000671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:17617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.248 [2024-11-25 12:28:12.000677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:16.248 [2024-11-25 12:28:12.010450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:16.248 [2024-11-25 12:28:12.010476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:23556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.248 [2024-11-25 12:28:12.010482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:16.248 [2024-11-25 12:28:12.019483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:16.248 [2024-11-25 12:28:12.019509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.248 [2024-11-25 12:28:12.019515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:16.248 [2024-11-25 12:28:12.029395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:16.248 [2024-11-25 12:28:12.029421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:4205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.248 [2024-11-25 12:28:12.029427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:16.248 [2024-11-25 12:28:12.040188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:16.248 [2024-11-25 12:28:12.040212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.248 [2024-11-25 12:28:12.040218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:16.248 [2024-11-25 12:28:12.050541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:16.248 [2024-11-25 12:28:12.050563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:24239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.248 [2024-11-25 12:28:12.050569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:16.248 [2024-11-25 12:28:12.060591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:16.248 [2024-11-25 12:28:12.060613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:6413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.248 [2024-11-25 12:28:12.060619] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.248 [2024-11-25 12:28:12.069853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680) 00:29:16.248 [2024-11-25 12:28:12.069877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.248 [2024-11-25 12:28:12.069883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.248 [2024-11-25 12:28:12.079249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680) 00:29:16.248 [2024-11-25 12:28:12.079274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.248 [2024-11-25 12:28:12.079280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.248 [2024-11-25 12:28:12.088503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680) 00:29:16.248 [2024-11-25 12:28:12.088526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:18038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.248 [2024-11-25 12:28:12.088533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.248 [2024-11-25 12:28:12.099214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680) 00:29:16.248 [2024-11-25 12:28:12.099237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:20884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.248 [2024-11-25 12:28:12.099243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.248 [2024-11-25 12:28:12.108780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680) 00:29:16.248 [2024-11-25 12:28:12.108804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:22484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.248 [2024-11-25 12:28:12.108810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.248 [2024-11-25 12:28:12.120039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680) 00:29:16.249 [2024-11-25 12:28:12.120062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.249 [2024-11-25 12:28:12.120068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.507 [2024-11-25 12:28:12.130161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680) 00:29:16.507 [2024-11-25 12:28:12.130188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:26 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.507 [2024-11-25 12:28:12.130193] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.507 [2024-11-25 12:28:12.140266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680) 00:29:16.507 [2024-11-25 12:28:12.140290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:20915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.507 [2024-11-25 12:28:12.140296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.507 [2024-11-25 12:28:12.150060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680) 00:29:16.507 [2024-11-25 12:28:12.150084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:14734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.507 [2024-11-25 12:28:12.150090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.507 [2024-11-25 12:28:12.159817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680) 00:29:16.507 [2024-11-25 12:28:12.159840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:13231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.507 [2024-11-25 12:28:12.159845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.507 [2024-11-25 12:28:12.169406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680) 00:29:16.507 [2024-11-25 12:28:12.169430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:24654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.507 [2024-11-25 12:28:12.169436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.507 [2024-11-25 12:28:12.179393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680) 00:29:16.507 [2024-11-25 12:28:12.179416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:7942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.507 [2024-11-25 12:28:12.179422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.507 [2024-11-25 12:28:12.189975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680) 00:29:16.507 [2024-11-25 12:28:12.190007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.507 [2024-11-25 12:28:12.190013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.507 [2024-11-25 12:28:12.200441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680) 00:29:16.507 [2024-11-25 12:28:12.200464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:2735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:16.507 [2024-11-25 12:28:12.200470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.507 [2024-11-25 12:28:12.210618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680) 00:29:16.507 [2024-11-25 12:28:12.210642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:1457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.507 [2024-11-25 12:28:12.210648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.507 [2024-11-25 12:28:12.220434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680) 00:29:16.507 [2024-11-25 12:28:12.220456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.507 [2024-11-25 12:28:12.220462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.507 [2024-11-25 12:28:12.230864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680) 00:29:16.507 [2024-11-25 12:28:12.230887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:7073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.507 [2024-11-25 12:28:12.230892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.507 [2024-11-25 12:28:12.240746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680) 00:29:16.507 [2024-11-25 12:28:12.240776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:16993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.507 [2024-11-25 12:28:12.240782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.507 [2024-11-25 12:28:12.250628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680) 00:29:16.507 [2024-11-25 12:28:12.250652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:5847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.508 [2024-11-25 12:28:12.250659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.508 [2024-11-25 12:28:12.260479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680) 00:29:16.508 [2024-11-25 12:28:12.260504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:18671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.508 [2024-11-25 12:28:12.260509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.508 [2024-11-25 12:28:12.270490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680) 00:29:16.508 [2024-11-25 12:28:12.270512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 
lba:15005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.508 [2024-11-25 12:28:12.270518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.508 [2024-11-25 12:28:12.280363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680) 00:29:16.508 [2024-11-25 12:28:12.280385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:14140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.508 [2024-11-25 12:28:12.280391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.508 [2024-11-25 12:28:12.291005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680) 00:29:16.508 [2024-11-25 12:28:12.291027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.508 [2024-11-25 12:28:12.291033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.508 [2024-11-25 12:28:12.299816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680) 00:29:16.508 [2024-11-25 12:28:12.299839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:6130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.508 [2024-11-25 12:28:12.299845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.508 [2024-11-25 12:28:12.308901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680) 00:29:16.508 [2024-11-25 12:28:12.308925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.508 [2024-11-25 12:28:12.308931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.508 [2024-11-25 12:28:12.319233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680) 00:29:16.508 [2024-11-25 12:28:12.319257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.508 [2024-11-25 12:28:12.319263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.508 [2024-11-25 12:28:12.329648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680) 00:29:16.508 [2024-11-25 12:28:12.329670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:16.508 [2024-11-25 12:28:12.329676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.508 [2024-11-25 12:28:12.338970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680) 00:29:16.508 [2024-11-25 12:28:12.338993] nvme_qpair.c: 
00:29:16.508 [2024-11-25 12:28:12.338970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:16.508 [2024-11-25 12:28:12.338993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:2303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.508 [2024-11-25 12:28:12.338999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:16.508 [2024-11-25 12:28:12.349846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:16.508 [2024-11-25 12:28:12.349869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.508 [2024-11-25 12:28:12.349875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:16.508 [2024-11-25 12:28:12.359898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:16.508 [2024-11-25 12:28:12.359921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:7794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.508 [2024-11-25 12:28:12.359927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:16.508 23173.00 IOPS, 90.52 MiB/s [2024-11-25T12:28:12.384Z]
[2024-11-25 12:28:12.369646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd97680)
00:29:16.508 [2024-11-25 12:28:12.369665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:18320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:16.508 [2024-11-25 12:28:12.369671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:16.508
00:29:16.508 Latency(us)
00:29:16.508 [2024-11-25T12:28:12.384Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:16.508 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:29:16.508 nvme0n1 : 2.00 23195.75 90.61 0.00 0.00 5512.80 2898.71 16736.89
00:29:16.508 [2024-11-25T12:28:12.384Z] ===================================================================================================================
00:29:16.508 [2024-11-25T12:28:12.384Z] Total : 23195.75 90.61 0.00 0.00 5512.80 2898.71 16736.89
00:29:16.508 {
00:29:16.508   "results": [
00:29:16.508     {
00:29:16.508       "job": "nvme0n1",
00:29:16.508       "core_mask": "0x2",
00:29:16.508       "workload": "randread",
00:29:16.508       "status": "finished",
00:29:16.508       "queue_depth": 128,
00:29:16.508       "io_size": 4096,
00:29:16.508       "runtime": 2.003557,
00:29:16.508       "iops": 23195.74636508969,
00:29:16.508       "mibps": 90.6083842386316,
00:29:16.508       "io_failed": 0,
00:29:16.508       "io_timeout": 0,
00:29:16.508       "avg_latency_us": 5512.8010047636235,
00:29:16.508       "min_latency_us": 2898.7076923076925,
00:29:16.508       "max_latency_us": 16736.886153846153
00:29:16.508     }
00:29:16.508   ],
00:29:16.508   "core_count": 1
00:29:16.508 }
00:29:16.765 12:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:16.765 12:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:16.765 | .driver_specific
00:29:16.765 | .nvme_error
00:29:16.765 | .status_code
00:29:16.765 | .command_transient_transport_error'
00:29:16.765 12:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:16.765 12:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:16.765 12:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 182 > 0 ))
00:29:16.765 12:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93574
00:29:16.765 12:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 93574 ']'
00:29:16.765 12:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 93574
00:29:16.765 12:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:29:16.765 12:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:16.765 12:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93574
00:29:16.765 killing process with pid 93574
Received shutdown signal, test time was about 2.000000 seconds
00:29:16.765
00:29:16.765 Latency(us)
00:29:16.765 [2024-11-25T12:28:12.641Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:16.765 [2024-11-25T12:28:12.641Z] ===================================================================================================================
00:29:16.765 [2024-11-25T12:28:12.641Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:16.765 12:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:16.765 12:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:29:16.765 12:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93574'
00:29:16.765 12:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 93574
00:29:16.765 12:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 93574
00:29:17.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
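Just above, host/digest.sh reads the error counters back out of the bperf socket and tears down the first bdevperf instance: (( 182 > 0 )) shows this pass recorded 182 transient transport errors, one per injected digest failure. The @28 jq filter pulls that counter out of bdev_get_iostat, which only carries driver_specific.nvme_error because bdev_nvme_set_options --nvme-error-stat was issued at setup. A condensed sketch of the traced helper, assuming the same rpc.py path and socket as this run:

get_transient_errcount() {
    local bdev=$1
    # --nvme-error-stat makes bdev_get_iostat report per-status-code NVMe error counts
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
}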
00:29:17.022 12:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:29:17.022 12:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:17.022 12:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:29:17.022 12:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:29:17.022 12:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:29:17.022 12:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93657
00:29:17.022 12:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93657 /var/tmp/bperf.sock
00:29:17.022 12:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 93657 ']'
00:29:17.022 12:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:29:17.022 12:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:17.022 12:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:17.022 12:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:17.022 12:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:17.022 12:28:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:17.023 [2024-11-25 12:28:12.803364] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization...
00:29:17.023 [2024-11-25 12:28:12.803439] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93657 ]
00:29:17.195 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:17.195 Zero copy mechanism will not be used.
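The second pass only changes the workload geometry: run_bperf_err randread 131072 16 selects 128 KiB random reads at queue depth 16, hence the zero-copy notice, since 131072 exceeds the 65536-byte threshold. Reading the traced command line, the bdevperf flags break down as follows; this is a re-annotation of the invocation above, not a different one:

# -m 2: core mask 0x2, so the single reactor runs on core 1
# -r:   RPC listen address that bperf_rpc and bperf_py talk to
# -w/-o/-q: random reads of 131072 bytes at queue depth 16
# -t 2: each perform_tests run lasts 2 seconds
# -z:   start idle and wait for the perform_tests RPC instead of running immediately
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z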
00:29:17.280 [2024-11-25 12:28:12.938926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:17.280 [2024-11-25 12:28:12.975924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:17.846 12:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:17.846 12:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:29:17.846 12:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:17.846 12:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:18.104 12:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:18.104 12:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:18.104 12:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:18.104 12:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:18.104 12:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:18.104 12:28:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:18.361 nvme0n1
00:29:18.361 12:28:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:18.361 12:28:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:18.361 12:28:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:18.361 12:28:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:18.361 12:28:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:18.361 12:28:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:18.621 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:18.621 Zero copy mechanism will not be used.
00:29:18.621 Running I/O for 2 seconds...
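The RPC sequence traced above is what arms the failure mode seen below: error statistics and unlimited retries are enabled first, any crc32c injection left over from the previous pass is cleared, the controller is attached with --ddgst so the NVMe/TCP data path verifies a CRC32C data digest on every data PDU, and only then is corrupt-type crc32c injection re-armed (the -i 32 argument appears to set the injection interval). Condensed into plain rpc.py calls, with the socket, address, and nqn values taken from this run:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
$rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # count errors per status code; retry failed I/O indefinitely
$rpc accel_error_inject_error -o crc32c -t disable                   # clear injection from the previous pass
$rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
$rpc accel_error_inject_error -o crc32c -t corrupt -i 32             # corrupt crc32c results so received data digests mismatch
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Each mismatch surfaces as an nvme_tcp.c:1365 data digest error below and completes the command with TRANSIENT TRANSPORT ERROR (00/22), which the retry policy then resubmits.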
00:29:18.621 [2024-11-25 12:28:14.289588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720)
00:29:18.621 [2024-11-25 12:28:14.289636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.621 [2024-11-25 12:28:14.289644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:18.621 [... repeated entries elided: the same data digest error on tqpair=(0x174a720), READ command print (len:32), and COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion recur roughly every 3 ms from 12:28:14.293 through 12:28:14.456, with only cid, lba, and sqhd varying ...]
00:29:18.623 [2024-11-25 12:28:14.459038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*:
data digest error on tqpair=(0x174a720) 00:29:18.623 [2024-11-25 12:28:14.459065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.623 [2024-11-25 12:28:14.459071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.623 [2024-11-25 12:28:14.462089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.623 [2024-11-25 12:28:14.462115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.623 [2024-11-25 12:28:14.462121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.623 [2024-11-25 12:28:14.465093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.623 [2024-11-25 12:28:14.465118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.623 [2024-11-25 12:28:14.465124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.623 [2024-11-25 12:28:14.468062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.623 [2024-11-25 12:28:14.468087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.623 [2024-11-25 12:28:14.468093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.623 [2024-11-25 12:28:14.471179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.623 [2024-11-25 12:28:14.471206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.623 [2024-11-25 12:28:14.471213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.623 [2024-11-25 12:28:14.474351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.623 [2024-11-25 12:28:14.474376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.623 [2024-11-25 12:28:14.474382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.623 [2024-11-25 12:28:14.477263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.623 [2024-11-25 12:28:14.477288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.623 [2024-11-25 12:28:14.477294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.623 [2024-11-25 12:28:14.480343] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.623 [2024-11-25 12:28:14.480369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.623 [2024-11-25 12:28:14.480375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.623 [2024-11-25 12:28:14.483334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.623 [2024-11-25 12:28:14.483361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.623 [2024-11-25 12:28:14.483367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.623 [2024-11-25 12:28:14.486260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.623 [2024-11-25 12:28:14.486285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.623 [2024-11-25 12:28:14.486291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.623 [2024-11-25 12:28:14.489264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.623 [2024-11-25 12:28:14.489289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.623 [2024-11-25 12:28:14.489295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.623 [2024-11-25 12:28:14.492457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.623 [2024-11-25 12:28:14.492484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.623 [2024-11-25 12:28:14.492490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.883 [2024-11-25 12:28:14.495463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.883 [2024-11-25 12:28:14.495489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.883 [2024-11-25 12:28:14.495495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.883 [2024-11-25 12:28:14.498567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.883 [2024-11-25 12:28:14.498593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.883 [2024-11-25 12:28:14.498599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:29:18.883 [2024-11-25 12:28:14.501452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.883 [2024-11-25 12:28:14.501478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.883 [2024-11-25 12:28:14.501484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.883 [2024-11-25 12:28:14.504577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.883 [2024-11-25 12:28:14.504602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.883 [2024-11-25 12:28:14.504609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.883 [2024-11-25 12:28:14.507666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.883 [2024-11-25 12:28:14.507693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.883 [2024-11-25 12:28:14.507699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.883 [2024-11-25 12:28:14.510763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.883 [2024-11-25 12:28:14.510787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.883 [2024-11-25 12:28:14.510794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.883 [2024-11-25 12:28:14.513784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.883 [2024-11-25 12:28:14.513808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.884 [2024-11-25 12:28:14.513814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.884 [2024-11-25 12:28:14.516851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.884 [2024-11-25 12:28:14.516877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.884 [2024-11-25 12:28:14.516882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.884 [2024-11-25 12:28:14.519928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.884 [2024-11-25 12:28:14.519953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.884 [2024-11-25 12:28:14.519959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.884 [2024-11-25 12:28:14.523028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.884 [2024-11-25 12:28:14.523055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.884 [2024-11-25 12:28:14.523061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.884 [2024-11-25 12:28:14.526026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.884 [2024-11-25 12:28:14.526051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.884 [2024-11-25 12:28:14.526057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.884 [2024-11-25 12:28:14.529096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.884 [2024-11-25 12:28:14.529123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.884 [2024-11-25 12:28:14.529129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.884 [2024-11-25 12:28:14.532211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.884 [2024-11-25 12:28:14.532236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.884 [2024-11-25 12:28:14.532242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.884 [2024-11-25 12:28:14.535158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.884 [2024-11-25 12:28:14.535184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.884 [2024-11-25 12:28:14.535191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.884 [2024-11-25 12:28:14.538138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.884 [2024-11-25 12:28:14.538169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.884 [2024-11-25 12:28:14.538175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.884 [2024-11-25 12:28:14.541145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.884 [2024-11-25 12:28:14.541170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.884 [2024-11-25 12:28:14.541176] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.884 [2024-11-25 12:28:14.544152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.884 [2024-11-25 12:28:14.544178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.884 [2024-11-25 12:28:14.544184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.884 [2024-11-25 12:28:14.547237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.884 [2024-11-25 12:28:14.547263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.884 [2024-11-25 12:28:14.547269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.884 [2024-11-25 12:28:14.550260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.884 [2024-11-25 12:28:14.550285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.884 [2024-11-25 12:28:14.550291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.884 [2024-11-25 12:28:14.553192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.884 [2024-11-25 12:28:14.553217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.884 [2024-11-25 12:28:14.553223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.884 [2024-11-25 12:28:14.555977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.884 [2024-11-25 12:28:14.556002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.884 [2024-11-25 12:28:14.556007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.884 [2024-11-25 12:28:14.558952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.884 [2024-11-25 12:28:14.558977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.884 [2024-11-25 12:28:14.558983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.884 [2024-11-25 12:28:14.561839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.884 [2024-11-25 12:28:14.561863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.884 [2024-11-25 
12:28:14.561868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.884 [2024-11-25 12:28:14.564762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.884 [2024-11-25 12:28:14.564786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.884 [2024-11-25 12:28:14.564792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.884 [2024-11-25 12:28:14.567868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.884 [2024-11-25 12:28:14.567895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.884 [2024-11-25 12:28:14.567901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.884 [2024-11-25 12:28:14.570863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.884 [2024-11-25 12:28:14.570889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.884 [2024-11-25 12:28:14.570895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.884 [2024-11-25 12:28:14.573956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.884 [2024-11-25 12:28:14.573982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.884 [2024-11-25 12:28:14.573988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.884 [2024-11-25 12:28:14.577012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.884 [2024-11-25 12:28:14.577037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.884 [2024-11-25 12:28:14.577043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.884 [2024-11-25 12:28:14.580121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.884 [2024-11-25 12:28:14.580147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.884 [2024-11-25 12:28:14.580152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.884 [2024-11-25 12:28:14.583156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.884 [2024-11-25 12:28:14.583182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:18.884 [2024-11-25 12:28:14.583188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.884 [2024-11-25 12:28:14.586111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.884 [2024-11-25 12:28:14.586136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.884 [2024-11-25 12:28:14.586149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.884 [2024-11-25 12:28:14.589186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.884 [2024-11-25 12:28:14.589211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.884 [2024-11-25 12:28:14.589217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.884 [2024-11-25 12:28:14.592218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.884 [2024-11-25 12:28:14.592244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.884 [2024-11-25 12:28:14.592250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.884 [2024-11-25 12:28:14.595268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.885 [2024-11-25 12:28:14.595295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.885 [2024-11-25 12:28:14.595301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.885 [2024-11-25 12:28:14.598325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.885 [2024-11-25 12:28:14.598351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.885 [2024-11-25 12:28:14.598358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.885 [2024-11-25 12:28:14.601318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.885 [2024-11-25 12:28:14.601344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.885 [2024-11-25 12:28:14.601350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.885 [2024-11-25 12:28:14.604382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.885 [2024-11-25 12:28:14.604408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:9 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.885 [2024-11-25 12:28:14.604414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.885 [2024-11-25 12:28:14.607419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.885 [2024-11-25 12:28:14.607444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.885 [2024-11-25 12:28:14.607451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.885 [2024-11-25 12:28:14.610446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.885 [2024-11-25 12:28:14.610472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.885 [2024-11-25 12:28:14.610479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.885 [2024-11-25 12:28:14.613498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.885 [2024-11-25 12:28:14.613524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.885 [2024-11-25 12:28:14.613530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.885 [2024-11-25 12:28:14.616502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.885 [2024-11-25 12:28:14.616529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.885 [2024-11-25 12:28:14.616535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.885 [2024-11-25 12:28:14.619687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.885 [2024-11-25 12:28:14.619714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.885 [2024-11-25 12:28:14.619719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.885 [2024-11-25 12:28:14.622700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.885 [2024-11-25 12:28:14.622725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.885 [2024-11-25 12:28:14.622731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.885 [2024-11-25 12:28:14.625567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.885 [2024-11-25 12:28:14.625593] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.885 [2024-11-25 12:28:14.625599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.885 [2024-11-25 12:28:14.628578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.885 [2024-11-25 12:28:14.628604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.885 [2024-11-25 12:28:14.628610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.885 [2024-11-25 12:28:14.631584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.885 [2024-11-25 12:28:14.631610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.885 [2024-11-25 12:28:14.631616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.885 [2024-11-25 12:28:14.634610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.885 [2024-11-25 12:28:14.634635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.885 [2024-11-25 12:28:14.634642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.885 [2024-11-25 12:28:14.637611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.885 [2024-11-25 12:28:14.637637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.885 [2024-11-25 12:28:14.637643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.885 [2024-11-25 12:28:14.640618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.885 [2024-11-25 12:28:14.640644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.885 [2024-11-25 12:28:14.640650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.885 [2024-11-25 12:28:14.643652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.885 [2024-11-25 12:28:14.643677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.885 [2024-11-25 12:28:14.643684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.885 [2024-11-25 12:28:14.646647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 
00:29:18.885 [2024-11-25 12:28:14.646673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.885 [2024-11-25 12:28:14.646679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.885 [2024-11-25 12:28:14.649529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.885 [2024-11-25 12:28:14.649554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.885 [2024-11-25 12:28:14.649560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.885 [2024-11-25 12:28:14.652547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.885 [2024-11-25 12:28:14.652572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.885 [2024-11-25 12:28:14.652578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.885 [2024-11-25 12:28:14.655545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.885 [2024-11-25 12:28:14.655570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.885 [2024-11-25 12:28:14.655577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.885 [2024-11-25 12:28:14.658574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.885 [2024-11-25 12:28:14.658599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.885 [2024-11-25 12:28:14.658605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.885 [2024-11-25 12:28:14.661536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.885 [2024-11-25 12:28:14.661562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.885 [2024-11-25 12:28:14.661568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.885 [2024-11-25 12:28:14.664549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.885 [2024-11-25 12:28:14.664574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.885 [2024-11-25 12:28:14.664580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.885 [2024-11-25 12:28:14.667651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.885 [2024-11-25 12:28:14.667678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.885 [2024-11-25 12:28:14.667683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.885 [2024-11-25 12:28:14.670652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.885 [2024-11-25 12:28:14.670678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.885 [2024-11-25 12:28:14.670684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.885 [2024-11-25 12:28:14.673665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.885 [2024-11-25 12:28:14.673690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.885 [2024-11-25 12:28:14.673696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.885 [2024-11-25 12:28:14.676682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.885 [2024-11-25 12:28:14.676709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.886 [2024-11-25 12:28:14.676715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.886 [2024-11-25 12:28:14.679721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.886 [2024-11-25 12:28:14.679758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.886 [2024-11-25 12:28:14.679765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.886 [2024-11-25 12:28:14.682720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.886 [2024-11-25 12:28:14.682746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.886 [2024-11-25 12:28:14.682764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.886 [2024-11-25 12:28:14.685653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.886 [2024-11-25 12:28:14.685679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.886 [2024-11-25 12:28:14.685684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.886 [2024-11-25 12:28:14.688668] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.886 [2024-11-25 12:28:14.688694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.886 [2024-11-25 12:28:14.688700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.886 [2024-11-25 12:28:14.691662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.886 [2024-11-25 12:28:14.691688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.886 [2024-11-25 12:28:14.691694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.886 [2024-11-25 12:28:14.694717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.886 [2024-11-25 12:28:14.694744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.886 [2024-11-25 12:28:14.694761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.886 [2024-11-25 12:28:14.697832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.886 [2024-11-25 12:28:14.697857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.886 [2024-11-25 12:28:14.697863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.886 [2024-11-25 12:28:14.700932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.886 [2024-11-25 12:28:14.700958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.886 [2024-11-25 12:28:14.700964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.886 [2024-11-25 12:28:14.703822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.886 [2024-11-25 12:28:14.703846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.886 [2024-11-25 12:28:14.703852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.886 [2024-11-25 12:28:14.706956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.886 [2024-11-25 12:28:14.706982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.886 [2024-11-25 12:28:14.706988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:29:18.886 [2024-11-25 12:28:14.710072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.886 [2024-11-25 12:28:14.710098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.886 [2024-11-25 12:28:14.710104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.886 [2024-11-25 12:28:14.713219] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.886 [2024-11-25 12:28:14.713246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.886 [2024-11-25 12:28:14.713252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.886 [2024-11-25 12:28:14.716339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.886 [2024-11-25 12:28:14.716365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.886 [2024-11-25 12:28:14.716371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.886 [2024-11-25 12:28:14.719382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.886 [2024-11-25 12:28:14.719407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.886 [2024-11-25 12:28:14.719414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.886 [2024-11-25 12:28:14.722427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.886 [2024-11-25 12:28:14.722453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.886 [2024-11-25 12:28:14.722459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.886 [2024-11-25 12:28:14.725519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.886 [2024-11-25 12:28:14.725545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.886 [2024-11-25 12:28:14.725551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.886 [2024-11-25 12:28:14.728670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.886 [2024-11-25 12:28:14.728697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.886 [2024-11-25 12:28:14.728703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.886 [2024-11-25 12:28:14.731685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.886 [2024-11-25 12:28:14.731711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.886 [2024-11-25 12:28:14.731717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.886 [2024-11-25 12:28:14.734730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.886 [2024-11-25 12:28:14.734768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.886 [2024-11-25 12:28:14.734774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.886 [2024-11-25 12:28:14.737828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.886 [2024-11-25 12:28:14.737852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.886 [2024-11-25 12:28:14.737858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.886 [2024-11-25 12:28:14.740867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.886 [2024-11-25 12:28:14.740891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.886 [2024-11-25 12:28:14.740898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.886 [2024-11-25 12:28:14.744074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.886 [2024-11-25 12:28:14.744099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.886 [2024-11-25 12:28:14.744105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.886 [2024-11-25 12:28:14.747226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.886 [2024-11-25 12:28:14.747252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.886 [2024-11-25 12:28:14.747258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.886 [2024-11-25 12:28:14.750378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:18.886 [2024-11-25 12:28:14.750404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.886 [2024-11-25 12:28:14.750410] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:18.886 [2024-11-25 12:28:14.753577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720)
00:29:18.886 [2024-11-25 12:28:14.753603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:18.886 [2024-11-25 12:28:14.753609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... entries from 2024-11-25 12:28:14.756 through 12:28:14.959 repeat the same three-line pattern: a data digest error on tqpair=(0x174a720) reported by nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done, the affected READ on sqid:1 (varying cid and lba, always len:32), and its completion with COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
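For readers decoding these entries: nvme_tcp.c is reporting a mismatch in the NVMe/TCP data digest (DDGST), the CRC32C checksum that trails each data PDU payload when digests are negotiated, and the driver then completes the affected READ with TRANSIENT TRANSPORT ERROR, printed as (00/22), i.e. status code type 0x0, status code 0x22. Below is a minimal, self-contained sketch of the CRC32C computation such a digest check is built on; this is illustrative bitwise C, not SPDK's accelerated implementation, and the sample payload and corruption are made up for the example.

/*
 * Illustrative sketch only (not SPDK source): NVMe/TCP's DDGST is a CRC32C
 * (Castagnoli) checksum over the data PDU payload. A receiver recomputes it
 * and compares against the digest trailing the payload; a mismatch is the
 * "data digest error" seen in the log above.
 */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Bitwise (table-less) CRC32C: reflected polynomial 0x82F63B78,
 * initial value and final XOR of 0xFFFFFFFF. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++) {
            crc = (crc & 1u) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
        }
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    /* Hypothetical payload standing in for a C2HData PDU's data section. */
    const char payload[] = "hypothetical PDU payload";
    uint32_t expected = crc32c((const uint8_t *)payload, strlen(payload));

    /* Simulate one bit corrupted in flight: the recomputed digest no longer
     * matches, which is the condition the driver logs as a digest error. */
    char corrupted[sizeof(payload)];
    memcpy(corrupted, payload, sizeof(payload));
    corrupted[3] ^= 0x01;
    uint32_t actual = crc32c((const uint8_t *)corrupted, strlen(corrupted));

    printf("expected DDGST 0x%08x, got 0x%08x -> %s\n", expected, actual,
           expected == actual ? "ok" : "data digest error");
    return 0;
}

In a real receive path the recomputed value would be compared against the 4-byte DDGST field after the PDU data, and a mismatch would be surfaced exactly as in the log: the transport flags the digest error, and the command completes with the transient transport error status so the host knows a retry is reasonable.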
[... the same three-line pattern continues for entries from 2024-11-25 12:28:14.959 through 12:28:15.104 ...]
00:29:19.414 [2024-11-25 12:28:15.107145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720)
00:29:19.414 [2024-11-25 12:28:15.107171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.414
[2024-11-25 12:28:15.107177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.414 [2024-11-25 12:28:15.110120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.414 [2024-11-25 12:28:15.110159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.414 [2024-11-25 12:28:15.110166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.414 [2024-11-25 12:28:15.113124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.414 [2024-11-25 12:28:15.113151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.414 [2024-11-25 12:28:15.113156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.414 [2024-11-25 12:28:15.116118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.414 [2024-11-25 12:28:15.116143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.414 [2024-11-25 12:28:15.116149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.414 [2024-11-25 12:28:15.119113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.414 [2024-11-25 12:28:15.119140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.414 [2024-11-25 12:28:15.119146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.414 [2024-11-25 12:28:15.122117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.414 [2024-11-25 12:28:15.122155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.414 [2024-11-25 12:28:15.122162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.414 [2024-11-25 12:28:15.125122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.414 [2024-11-25 12:28:15.125148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.414 [2024-11-25 12:28:15.125154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.414 [2024-11-25 12:28:15.128148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.414 [2024-11-25 12:28:15.128173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1312 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.414 [2024-11-25 12:28:15.128179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.414 [2024-11-25 12:28:15.131154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.414 [2024-11-25 12:28:15.131180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.414 [2024-11-25 12:28:15.131186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.414 [2024-11-25 12:28:15.134191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.414 [2024-11-25 12:28:15.134216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.414 [2024-11-25 12:28:15.134222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.414 [2024-11-25 12:28:15.137133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.414 [2024-11-25 12:28:15.137160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.414 [2024-11-25 12:28:15.137167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.415 [2024-11-25 12:28:15.140150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.415 [2024-11-25 12:28:15.140176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.415 [2024-11-25 12:28:15.140181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.415 [2024-11-25 12:28:15.143146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.415 [2024-11-25 12:28:15.143172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.415 [2024-11-25 12:28:15.143178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.415 [2024-11-25 12:28:15.146194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.415 [2024-11-25 12:28:15.146219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.415 [2024-11-25 12:28:15.146225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.415 [2024-11-25 12:28:15.149321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.415 [2024-11-25 12:28:15.149348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:4 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.415 [2024-11-25 12:28:15.149354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.415 [2024-11-25 12:28:15.152427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.415 [2024-11-25 12:28:15.152452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.415 [2024-11-25 12:28:15.152458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.415 [2024-11-25 12:28:15.155501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.415 [2024-11-25 12:28:15.155528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.415 [2024-11-25 12:28:15.155534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.415 [2024-11-25 12:28:15.158621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.415 [2024-11-25 12:28:15.158648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.415 [2024-11-25 12:28:15.158654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.415 [2024-11-25 12:28:15.161615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.415 [2024-11-25 12:28:15.161641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.415 [2024-11-25 12:28:15.161647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.415 [2024-11-25 12:28:15.164614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.415 [2024-11-25 12:28:15.164641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.415 [2024-11-25 12:28:15.164647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.415 [2024-11-25 12:28:15.167557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.415 [2024-11-25 12:28:15.167583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.415 [2024-11-25 12:28:15.167588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.415 [2024-11-25 12:28:15.170574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.415 [2024-11-25 12:28:15.170601] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.415 [2024-11-25 12:28:15.170606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.415 [2024-11-25 12:28:15.173495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.415 [2024-11-25 12:28:15.173521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.415 [2024-11-25 12:28:15.173526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.415 [2024-11-25 12:28:15.176515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.415 [2024-11-25 12:28:15.176539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.415 [2024-11-25 12:28:15.176545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.415 [2024-11-25 12:28:15.179477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.415 [2024-11-25 12:28:15.179503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.415 [2024-11-25 12:28:15.179509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.415 [2024-11-25 12:28:15.182487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.415 [2024-11-25 12:28:15.182512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.415 [2024-11-25 12:28:15.182518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.415 [2024-11-25 12:28:15.185533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.415 [2024-11-25 12:28:15.185559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.415 [2024-11-25 12:28:15.185565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.415 [2024-11-25 12:28:15.188431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.415 [2024-11-25 12:28:15.188456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.415 [2024-11-25 12:28:15.188462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.415 [2024-11-25 12:28:15.191388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x174a720) 00:29:19.415 [2024-11-25 12:28:15.191414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.415 [2024-11-25 12:28:15.191421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.415 [2024-11-25 12:28:15.194365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.415 [2024-11-25 12:28:15.194391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.415 [2024-11-25 12:28:15.194397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.415 [2024-11-25 12:28:15.197331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.415 [2024-11-25 12:28:15.197356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.415 [2024-11-25 12:28:15.197363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.415 [2024-11-25 12:28:15.200314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.415 [2024-11-25 12:28:15.200339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.415 [2024-11-25 12:28:15.200345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.415 [2024-11-25 12:28:15.203154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.415 [2024-11-25 12:28:15.203180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.415 [2024-11-25 12:28:15.203185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.415 [2024-11-25 12:28:15.206114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.415 [2024-11-25 12:28:15.206155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.415 [2024-11-25 12:28:15.206161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.415 [2024-11-25 12:28:15.209036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.415 [2024-11-25 12:28:15.209062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.415 [2024-11-25 12:28:15.209068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.415 [2024-11-25 12:28:15.211165] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.415 [2024-11-25 12:28:15.211193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.415 [2024-11-25 12:28:15.211198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.415 [2024-11-25 12:28:15.213887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.415 [2024-11-25 12:28:15.213914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.415 [2024-11-25 12:28:15.213921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.415 [2024-11-25 12:28:15.216901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.415 [2024-11-25 12:28:15.216925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.416 [2024-11-25 12:28:15.216931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.416 [2024-11-25 12:28:15.219050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.416 [2024-11-25 12:28:15.219074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.416 [2024-11-25 12:28:15.219080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.416 [2024-11-25 12:28:15.221617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.416 [2024-11-25 12:28:15.221644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.416 [2024-11-25 12:28:15.221649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.416 [2024-11-25 12:28:15.224637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.416 [2024-11-25 12:28:15.224663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.416 [2024-11-25 12:28:15.224669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.416 [2024-11-25 12:28:15.226880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.416 [2024-11-25 12:28:15.226904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.416 [2024-11-25 12:28:15.226910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:29:19.416 [2024-11-25 12:28:15.229579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.416 [2024-11-25 12:28:15.229606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.416 [2024-11-25 12:28:15.229612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.416 [2024-11-25 12:28:15.232724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.416 [2024-11-25 12:28:15.232760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.416 [2024-11-25 12:28:15.232766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.416 [2024-11-25 12:28:15.235897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.416 [2024-11-25 12:28:15.235922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.416 [2024-11-25 12:28:15.235929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.416 [2024-11-25 12:28:15.239113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.416 [2024-11-25 12:28:15.239139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.416 [2024-11-25 12:28:15.239145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.416 [2024-11-25 12:28:15.242155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.416 [2024-11-25 12:28:15.242181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.416 [2024-11-25 12:28:15.242186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.416 [2024-11-25 12:28:15.245338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.416 [2024-11-25 12:28:15.245365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.416 [2024-11-25 12:28:15.245370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.416 [2024-11-25 12:28:15.248678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.416 [2024-11-25 12:28:15.248708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.416 [2024-11-25 12:28:15.248715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.416 [2024-11-25 12:28:15.251848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.416 [2024-11-25 12:28:15.251875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.416 [2024-11-25 12:28:15.251882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.416 [2024-11-25 12:28:15.254976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.416 [2024-11-25 12:28:15.255003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.416 [2024-11-25 12:28:15.255010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.416 [2024-11-25 12:28:15.258036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.416 [2024-11-25 12:28:15.258063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.416 [2024-11-25 12:28:15.258069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.416 [2024-11-25 12:28:15.261142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.416 [2024-11-25 12:28:15.261168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.416 [2024-11-25 12:28:15.261173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.416 [2024-11-25 12:28:15.264140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.416 [2024-11-25 12:28:15.264166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.416 [2024-11-25 12:28:15.264171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.416 [2024-11-25 12:28:15.267145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.416 [2024-11-25 12:28:15.267171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.416 [2024-11-25 12:28:15.267177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.416 [2024-11-25 12:28:15.270162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.416 [2024-11-25 12:28:15.270188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.416 [2024-11-25 12:28:15.270194] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.416 [2024-11-25 12:28:15.273274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.416 [2024-11-25 12:28:15.273302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.416 [2024-11-25 12:28:15.273307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.416 [2024-11-25 12:28:15.276148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.416 [2024-11-25 12:28:15.276175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.416 [2024-11-25 12:28:15.276180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.416 [2024-11-25 12:28:15.279243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.416 [2024-11-25 12:28:15.279270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.416 [2024-11-25 12:28:15.279276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.416 [2024-11-25 12:28:15.282226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.416 [2024-11-25 12:28:15.282252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.416 [2024-11-25 12:28:15.282257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.689 [2024-11-25 12:28:15.285438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.689 [2024-11-25 12:28:15.285468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.689 [2024-11-25 12:28:15.285475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.689 11045.00 IOPS, 1380.62 MiB/s [2024-11-25T12:28:15.565Z] [2024-11-25 12:28:15.289817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.689 [2024-11-25 12:28:15.289842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.689 [2024-11-25 12:28:15.289849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.689 [2024-11-25 12:28:15.292737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.689 [2024-11-25 12:28:15.292774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 
lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.689 [2024-11-25 12:28:15.292780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.689 [2024-11-25 12:28:15.295868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.689 [2024-11-25 12:28:15.295894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.689 [2024-11-25 12:28:15.295900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.689 [2024-11-25 12:28:15.298992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.689 [2024-11-25 12:28:15.299019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.689 [2024-11-25 12:28:15.299025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.689 [2024-11-25 12:28:15.302161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.689 [2024-11-25 12:28:15.302188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.689 [2024-11-25 12:28:15.302194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.689 [2024-11-25 12:28:15.305149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.689 [2024-11-25 12:28:15.305176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.689 [2024-11-25 12:28:15.305182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.689 [2024-11-25 12:28:15.308323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.689 [2024-11-25 12:28:15.308350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.689 [2024-11-25 12:28:15.308356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.689 [2024-11-25 12:28:15.311551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.689 [2024-11-25 12:28:15.311578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.689 [2024-11-25 12:28:15.311584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.689 [2024-11-25 12:28:15.314776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.689 [2024-11-25 12:28:15.314800] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.689 [2024-11-25 12:28:15.314806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.689 [2024-11-25 12:28:15.317866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.689 [2024-11-25 12:28:15.317891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.689 [2024-11-25 12:28:15.317897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.689 [2024-11-25 12:28:15.321035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.689 [2024-11-25 12:28:15.321060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.689 [2024-11-25 12:28:15.321067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.689 [2024-11-25 12:28:15.324282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.689 [2024-11-25 12:28:15.324308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.689 [2024-11-25 12:28:15.324314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.689 [2024-11-25 12:28:15.327433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.689 [2024-11-25 12:28:15.327459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.690 [2024-11-25 12:28:15.327466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.690 [2024-11-25 12:28:15.330562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.690 [2024-11-25 12:28:15.330587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.690 [2024-11-25 12:28:15.330593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.690 [2024-11-25 12:28:15.333764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.690 [2024-11-25 12:28:15.333788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.690 [2024-11-25 12:28:15.333794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.690 [2024-11-25 12:28:15.336898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 
00:29:19.690 [2024-11-25 12:28:15.336924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.690 [2024-11-25 12:28:15.336930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.690 [2024-11-25 12:28:15.340046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.690 [2024-11-25 12:28:15.340073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.690 [2024-11-25 12:28:15.340078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.690 [2024-11-25 12:28:15.343072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.690 [2024-11-25 12:28:15.343097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.690 [2024-11-25 12:28:15.343103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.690 [2024-11-25 12:28:15.346090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.690 [2024-11-25 12:28:15.346115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.690 [2024-11-25 12:28:15.346121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.690 [2024-11-25 12:28:15.349172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.690 [2024-11-25 12:28:15.349199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.690 [2024-11-25 12:28:15.349205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.690 [2024-11-25 12:28:15.352235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.690 [2024-11-25 12:28:15.352262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.690 [2024-11-25 12:28:15.352268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.690 [2024-11-25 12:28:15.355270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.690 [2024-11-25 12:28:15.355296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.690 [2024-11-25 12:28:15.355302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.690 [2024-11-25 12:28:15.358335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.690 [2024-11-25 12:28:15.358360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.690 [2024-11-25 12:28:15.358366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.690 [2024-11-25 12:28:15.361429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.690 [2024-11-25 12:28:15.361455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.690 [2024-11-25 12:28:15.361461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.690 [2024-11-25 12:28:15.364504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.690 [2024-11-25 12:28:15.364530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.690 [2024-11-25 12:28:15.364536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.690 [2024-11-25 12:28:15.367609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.690 [2024-11-25 12:28:15.367635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.690 [2024-11-25 12:28:15.367640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.690 [2024-11-25 12:28:15.370605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.690 [2024-11-25 12:28:15.370630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.690 [2024-11-25 12:28:15.370636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.690 [2024-11-25 12:28:15.373464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.690 [2024-11-25 12:28:15.373489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.690 [2024-11-25 12:28:15.373495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.690 [2024-11-25 12:28:15.376379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.690 [2024-11-25 12:28:15.376405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.690 [2024-11-25 12:28:15.376411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.690 [2024-11-25 12:28:15.379412] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720)
00:29:19.690 [2024-11-25 12:28:15.379439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.690 [2024-11-25 12:28:15.379444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:19.690 [2024-11-25 12:28:15.382433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720)
00:29:19.690 [2024-11-25 12:28:15.382459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.690 [2024-11-25 12:28:15.382464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:19.690 [2024-11-25 12:28:15.385422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720)
00:29:19.690 [2024-11-25 12:28:15.385448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.690 [2024-11-25 12:28:15.385453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... repeated data digest error / COMMAND TRANSIENT TRANSPORT ERROR (00/22) entries for READs on tqpair=(0x174a720), 12:28:15.388 through 12:28:15.807 ...]
00:29:19.958 [2024-11-25 12:28:15.809736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720)
00:29:19.958 [2024-11-25 12:28:15.809769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.958 [2024-11-25 12:28:15.809774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:19.958 [2024-11-25 12:28:15.812788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720)
00:29:19.958 [2024-11-25 12:28:15.812814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.958 [2024-11-25 12:28:15.812820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:19.958 [2024-11-25 12:28:15.815885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720)
00:29:19.958 [2024-11-25 12:28:15.815910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:19.958 [2024-11-25 12:28:15.815916] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.958 [2024-11-25 12:28:15.818947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.958 [2024-11-25 12:28:15.818973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.958 [2024-11-25 12:28:15.818979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.958 [2024-11-25 12:28:15.821927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.958 [2024-11-25 12:28:15.821954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.958 [2024-11-25 12:28:15.821960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.958 [2024-11-25 12:28:15.825097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:19.958 [2024-11-25 12:28:15.825127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.959 [2024-11-25 12:28:15.825133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:20.217 [2024-11-25 12:28:15.828156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.218 [2024-11-25 12:28:15.828183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.218 [2024-11-25 12:28:15.828190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:20.218 [2024-11-25 12:28:15.831205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.218 [2024-11-25 12:28:15.831235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.218 [2024-11-25 12:28:15.831241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:20.218 [2024-11-25 12:28:15.834428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.218 [2024-11-25 12:28:15.834456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.218 [2024-11-25 12:28:15.834462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:20.218 [2024-11-25 12:28:15.837641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.218 [2024-11-25 12:28:15.837671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:20.218 [2024-11-25 12:28:15.837677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:20.218 [2024-11-25 12:28:15.840869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.218 [2024-11-25 12:28:15.840895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.218 [2024-11-25 12:28:15.840901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:20.218 [2024-11-25 12:28:15.843974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.218 [2024-11-25 12:28:15.844000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.218 [2024-11-25 12:28:15.844006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:20.218 [2024-11-25 12:28:15.846972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.218 [2024-11-25 12:28:15.846998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.218 [2024-11-25 12:28:15.847004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:20.218 [2024-11-25 12:28:15.849956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.218 [2024-11-25 12:28:15.849981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.218 [2024-11-25 12:28:15.849987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:20.218 [2024-11-25 12:28:15.853056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.218 [2024-11-25 12:28:15.853082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.218 [2024-11-25 12:28:15.853088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:20.218 [2024-11-25 12:28:15.856159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.218 [2024-11-25 12:28:15.856186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.218 [2024-11-25 12:28:15.856192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:20.218 [2024-11-25 12:28:15.859302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.218 [2024-11-25 12:28:15.859330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 
lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.218 [2024-11-25 12:28:15.859336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:20.218 [2024-11-25 12:28:15.862408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.218 [2024-11-25 12:28:15.862434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.218 [2024-11-25 12:28:15.862441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:20.218 [2024-11-25 12:28:15.865511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.218 [2024-11-25 12:28:15.865536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.218 [2024-11-25 12:28:15.865542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:20.218 [2024-11-25 12:28:15.868598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.218 [2024-11-25 12:28:15.868624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.218 [2024-11-25 12:28:15.868629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:20.218 [2024-11-25 12:28:15.871590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.218 [2024-11-25 12:28:15.871616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.218 [2024-11-25 12:28:15.871621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:20.218 [2024-11-25 12:28:15.874630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.218 [2024-11-25 12:28:15.874656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.218 [2024-11-25 12:28:15.874662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:20.218 [2024-11-25 12:28:15.877719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.218 [2024-11-25 12:28:15.877745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.218 [2024-11-25 12:28:15.877762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:20.218 [2024-11-25 12:28:15.880831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.218 [2024-11-25 12:28:15.880856] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.218 [2024-11-25 12:28:15.880862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:20.218 [2024-11-25 12:28:15.883814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.218 [2024-11-25 12:28:15.883837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.218 [2024-11-25 12:28:15.883843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:20.218 [2024-11-25 12:28:15.886801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.218 [2024-11-25 12:28:15.886824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.218 [2024-11-25 12:28:15.886830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:20.218 [2024-11-25 12:28:15.889808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.218 [2024-11-25 12:28:15.889832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.218 [2024-11-25 12:28:15.889838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:20.218 [2024-11-25 12:28:15.892814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.218 [2024-11-25 12:28:15.892837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.218 [2024-11-25 12:28:15.892843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:20.218 [2024-11-25 12:28:15.895846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.219 [2024-11-25 12:28:15.895870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.219 [2024-11-25 12:28:15.895876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:20.219 [2024-11-25 12:28:15.898855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.219 [2024-11-25 12:28:15.898879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.219 [2024-11-25 12:28:15.898885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:20.219 [2024-11-25 12:28:15.901842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 
00:29:20.219 [2024-11-25 12:28:15.901867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.219 [2024-11-25 12:28:15.901873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:20.219 [2024-11-25 12:28:15.904856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.219 [2024-11-25 12:28:15.904880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.219 [2024-11-25 12:28:15.904886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:20.219 [2024-11-25 12:28:15.907851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.219 [2024-11-25 12:28:15.907875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.219 [2024-11-25 12:28:15.907881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:20.219 [2024-11-25 12:28:15.910774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.219 [2024-11-25 12:28:15.910797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.219 [2024-11-25 12:28:15.910803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:20.219 [2024-11-25 12:28:15.913764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.219 [2024-11-25 12:28:15.913788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.219 [2024-11-25 12:28:15.913794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:20.219 [2024-11-25 12:28:15.916730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.219 [2024-11-25 12:28:15.916767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.219 [2024-11-25 12:28:15.916773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:20.219 [2024-11-25 12:28:15.919704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.219 [2024-11-25 12:28:15.919730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.219 [2024-11-25 12:28:15.919735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:20.219 [2024-11-25 12:28:15.922653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.219 [2024-11-25 12:28:15.922678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.219 [2024-11-25 12:28:15.922684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:20.219 [2024-11-25 12:28:15.925691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.219 [2024-11-25 12:28:15.925851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.219 [2024-11-25 12:28:15.925859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:20.219 [2024-11-25 12:28:15.928866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.219 [2024-11-25 12:28:15.928892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.219 [2024-11-25 12:28:15.928898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:20.219 [2024-11-25 12:28:15.931905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.219 [2024-11-25 12:28:15.931931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.219 [2024-11-25 12:28:15.931937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:20.219 [2024-11-25 12:28:15.934923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.219 [2024-11-25 12:28:15.934949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.219 [2024-11-25 12:28:15.934955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:20.219 [2024-11-25 12:28:15.937936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.219 [2024-11-25 12:28:15.937961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.219 [2024-11-25 12:28:15.937967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:20.219 [2024-11-25 12:28:15.940975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.219 [2024-11-25 12:28:15.941002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.219 [2024-11-25 12:28:15.941008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:20.219 [2024-11-25 12:28:15.944050] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.219 [2024-11-25 12:28:15.944075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.219 [2024-11-25 12:28:15.944081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:20.219 [2024-11-25 12:28:15.946952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.219 [2024-11-25 12:28:15.946978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.219 [2024-11-25 12:28:15.946984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:20.219 [2024-11-25 12:28:15.949969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.219 [2024-11-25 12:28:15.949996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.219 [2024-11-25 12:28:15.950002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:20.219 [2024-11-25 12:28:15.953054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.219 [2024-11-25 12:28:15.953080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.219 [2024-11-25 12:28:15.953086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:20.219 [2024-11-25 12:28:15.956091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.219 [2024-11-25 12:28:15.956116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.219 [2024-11-25 12:28:15.956122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:20.219 [2024-11-25 12:28:15.959111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.219 [2024-11-25 12:28:15.959137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.219 [2024-11-25 12:28:15.959142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:20.220 [2024-11-25 12:28:15.962025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.220 [2024-11-25 12:28:15.962051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.220 [2024-11-25 12:28:15.962057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:29:20.220 [2024-11-25 12:28:15.965016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.220 [2024-11-25 12:28:15.965042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.220 [2024-11-25 12:28:15.965047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:20.220 [2024-11-25 12:28:15.968010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.220 [2024-11-25 12:28:15.968036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.220 [2024-11-25 12:28:15.968042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:20.220 [2024-11-25 12:28:15.971015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.220 [2024-11-25 12:28:15.971040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.220 [2024-11-25 12:28:15.971046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:20.220 [2024-11-25 12:28:15.973982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.220 [2024-11-25 12:28:15.974007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.220 [2024-11-25 12:28:15.974013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:20.220 [2024-11-25 12:28:15.977013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.220 [2024-11-25 12:28:15.977038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.220 [2024-11-25 12:28:15.977045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:20.220 [2024-11-25 12:28:15.980072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.220 [2024-11-25 12:28:15.980098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.220 [2024-11-25 12:28:15.980104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:20.220 [2024-11-25 12:28:15.983090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.220 [2024-11-25 12:28:15.983115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.220 [2024-11-25 12:28:15.983121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:20.220 [2024-11-25 12:28:15.986175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.220 [2024-11-25 12:28:15.986201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.220 [2024-11-25 12:28:15.986207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:20.220 [2024-11-25 12:28:15.989180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.220 [2024-11-25 12:28:15.989207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.220 [2024-11-25 12:28:15.989212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:20.220 [2024-11-25 12:28:15.992273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.220 [2024-11-25 12:28:15.992300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.220 [2024-11-25 12:28:15.992306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:20.220 [2024-11-25 12:28:15.995269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.220 [2024-11-25 12:28:15.995295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.220 [2024-11-25 12:28:15.995301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:20.220 [2024-11-25 12:28:15.998267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.220 [2024-11-25 12:28:15.998294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.220 [2024-11-25 12:28:15.998300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:20.220 [2024-11-25 12:28:16.001302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.220 [2024-11-25 12:28:16.001328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.220 [2024-11-25 12:28:16.001333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:20.220 [2024-11-25 12:28:16.004321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.220 [2024-11-25 12:28:16.004347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.220 [2024-11-25 12:28:16.004353] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:20.220 [2024-11-25 12:28:16.007260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.220 [2024-11-25 12:28:16.007286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.220 [2024-11-25 12:28:16.007292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:20.220 [2024-11-25 12:28:16.010239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.220 [2024-11-25 12:28:16.010266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.220 [2024-11-25 12:28:16.010272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:20.220 [2024-11-25 12:28:16.013205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.220 [2024-11-25 12:28:16.013231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.220 [2024-11-25 12:28:16.013237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:20.220 [2024-11-25 12:28:16.016224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.220 [2024-11-25 12:28:16.016250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.220 [2024-11-25 12:28:16.016255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:20.220 [2024-11-25 12:28:16.019270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.220 [2024-11-25 12:28:16.019297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.220 [2024-11-25 12:28:16.019303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:20.221 [2024-11-25 12:28:16.022300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.221 [2024-11-25 12:28:16.022326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.221 [2024-11-25 12:28:16.022332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:20.221 [2024-11-25 12:28:16.025309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.221 [2024-11-25 12:28:16.025336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:20.221 [2024-11-25 12:28:16.025342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:20.221 [2024-11-25 12:28:16.028359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.221 [2024-11-25 12:28:16.028386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.221 [2024-11-25 12:28:16.028392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:20.221 [2024-11-25 12:28:16.031260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.221 [2024-11-25 12:28:16.031286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.221 [2024-11-25 12:28:16.031292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:20.221 [2024-11-25 12:28:16.034259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.221 [2024-11-25 12:28:16.034285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.221 [2024-11-25 12:28:16.034291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:20.221 [2024-11-25 12:28:16.037199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.221 [2024-11-25 12:28:16.037225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.221 [2024-11-25 12:28:16.037231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:20.221 [2024-11-25 12:28:16.040249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.221 [2024-11-25 12:28:16.040275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.221 [2024-11-25 12:28:16.040281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:20.221 [2024-11-25 12:28:16.043253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.221 [2024-11-25 12:28:16.043278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.221 [2024-11-25 12:28:16.043284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:20.221 [2024-11-25 12:28:16.046281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.221 [2024-11-25 12:28:16.046307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20352 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.221 [2024-11-25 12:28:16.046313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:20.221 [2024-11-25 12:28:16.049262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.221 [2024-11-25 12:28:16.049288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.221 [2024-11-25 12:28:16.049294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:20.221 [2024-11-25 12:28:16.052300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.221 [2024-11-25 12:28:16.052325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.221 [2024-11-25 12:28:16.052330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:20.221 [2024-11-25 12:28:16.055308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.221 [2024-11-25 12:28:16.055333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.221 [2024-11-25 12:28:16.055339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:20.221 [2024-11-25 12:28:16.058307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.221 [2024-11-25 12:28:16.058335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.221 [2024-11-25 12:28:16.058341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:20.221 [2024-11-25 12:28:16.061157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.221 [2024-11-25 12:28:16.061294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.221 [2024-11-25 12:28:16.061302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:20.221 [2024-11-25 12:28:16.064279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.221 [2024-11-25 12:28:16.064307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.221 [2024-11-25 12:28:16.064313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:20.221 [2024-11-25 12:28:16.067328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.221 [2024-11-25 12:28:16.067355] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.221 [2024-11-25 12:28:16.067362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:20.221 [2024-11-25 12:28:16.070399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.221 [2024-11-25 12:28:16.070427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.221 [2024-11-25 12:28:16.070433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:20.221 [2024-11-25 12:28:16.073401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.221 [2024-11-25 12:28:16.073427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.221 [2024-11-25 12:28:16.073432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:20.221 [2024-11-25 12:28:16.076413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.221 [2024-11-25 12:28:16.076519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.221 [2024-11-25 12:28:16.076527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:20.221 [2024-11-25 12:28:16.079462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.221 [2024-11-25 12:28:16.079488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.221 [2024-11-25 12:28:16.079494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:20.221 [2024-11-25 12:28:16.082547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.221 [2024-11-25 12:28:16.082574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.221 [2024-11-25 12:28:16.082580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:20.221 [2024-11-25 12:28:16.085637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 00:29:20.221 [2024-11-25 12:28:16.085663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.221 [2024-11-25 12:28:16.085670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:20.221 [2024-11-25 12:28:16.088784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720) 
00:29:20.221 [2024-11-25 12:28:16.088811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:20.221 [2024-11-25 12:28:16.088821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:20.480 [2024-11-25 12:28:16.092155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720)
00:29:20.480 [2024-11-25 12:28:16.092185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:20.480 [2024-11-25 12:28:16.092191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[... dozens of similar data digest error triplets on tqpair=(0x174a720), each a READ on qid:1 (cids 1, 3, 4, 5, 8, 10-13) completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22) ...]
00:29:20.483 [2024-11-25 12:28:16.277978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720)
00:29:20.483 [2024-11-25 12:28:16.278004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:20.483 [2024-11-25 12:28:16.278010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:20.483 [2024-11-25 12:28:16.281022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720)
00:29:20.483 [2024-11-25 12:28:16.281048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:20.483 [2024-11-25 12:28:16.281054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:20.483 [2024-11-25 12:28:16.284022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720)
00:29:20.483 [2024-11-25 12:28:16.284048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:20.483 [2024-11-25 12:28:16.284053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:20.483 10614.50 IOPS, 1326.81 MiB/s [2024-11-25T12:28:16.359Z] [2024-11-25 12:28:16.288199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x174a720)
00:29:20.483 [2024-11-25 12:28:16.288221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:20.483 [2024-11-25 12:28:16.288227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:20.483
00:29:20.483 Latency(us)
00:29:20.483 [2024-11-25T12:28:16.359Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:29:20.483 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:29:20.483 nvme0n1                     :       2.00   10611.47    1326.43       0.00     0.00    1505.23     482.07    5318.50
00:29:20.483 [2024-11-25T12:28:16.359Z] ===================================================================================================================
00:29:20.483 [2024-11-25T12:28:16.359Z] Total                       :              10611.47    1326.43       0.00     0.00    1505.23     482.07    5318.50
00:29:20.483 {
00:29:20.483   "results": [
00:29:20.483     {
00:29:20.483       "job": "nvme0n1",
00:29:20.483       "core_mask": "0x2",
00:29:20.483       "workload": "randread",
00:29:20.483       "status": "finished",
00:29:20.483       "queue_depth": 16,
00:29:20.483       "io_size": 131072,
00:29:20.483       "runtime": 2.002079,
00:29:20.483       "iops": 10611.469377582003,
00:29:20.483       "mibps": 1326.4336721977504,
00:29:20.483       "io_failed": 0,
00:29:20.483       "io_timeout": 0,
00:29:20.483       "avg_latency_us": 1505.225905244673,
00:29:20.483       "min_latency_us": 482.0676923076923,
00:29:20.483       "max_latency_us": 5318.498461538462
00:29:20.483     }
00:29:20.483   ],
00:29:20.483   "core_count": 1
00:29:20.483 }
00:29:20.483 12:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:20.483 12:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:20.483 | .driver_specific
00:29:20.483 | .nvme_error
00:29:20.483 | .status_code
00:29:20.483 | .command_transient_transport_error'
00:29:20.483 12:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:20.483 12:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:20.741 12:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 686 > 0 ))
00:29:20.741 12:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93657
00:29:20.741 12:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 93657 ']'
00:29:20.741 12:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 93657
00:29:20.741 12:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:29:20.741 12:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:20.741 12:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93657
00:29:20.741 killing process with pid 93657
00:29:20.741 Received shutdown signal, test time was about 2.000000 seconds
00:29:20.741
00:29:20.741 Latency(us)
00:29:20.741 [2024-11-25T12:28:16.617Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:29:20.741 [2024-11-25T12:28:16.617Z] ===================================================================================================================
00:29:20.741 [2024-11-25T12:28:16.617Z] Total                       :       0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:29:20.741 12:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:20.741 12:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:29:20.741 12:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93657'
00:29:20.741 12:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 93657
00:29:20.741 12:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 93657
00:29:20.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
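For reference, the get_transient_errcount step traced above reduces to one RPC call piped through jq; a minimal sketch, assuming bdevperf is still serving RPCs on /var/tmp/bperf.sock and the bdev is named nvme0n1:

    # count completions that ended in TRANSIENT TRANSPORT ERROR
    # (only populated because bdev_nvme_set_options was given --nvme-error-stat)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

The (( 686 > 0 )) check above only asserts that this counter is non-zero; here 686 READs completed with the injected digest error. The summary numbers are also self-consistent: at an io_size of 131072 bytes (128 KiB, i.e. 1/8 MiB), 10611.47 IOPS / 8 = 1326.43 MiB/s, matching the reported mibps.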
00:29:20.999 12:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:29:20.999 12:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:20.999 12:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:29:20.999 12:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:29:21.000 12:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:29:21.000 12:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93742
00:29:21.000 12:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93742 /var/tmp/bperf.sock
00:29:21.000 12:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 93742 ']'
00:29:21.000 12:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:21.000 12:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:29:21.000 12:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:21.000 12:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:21.000 12:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:21.000 12:28:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:21.000 [2024-11-25 12:28:16.720216] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization...
00:29:21.000 [2024-11-25 12:28:16.720483] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93742 ]
00:29:21.000 [2024-11-25 12:28:16.855787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:21.257 [2024-11-25 12:28:16.892453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:21.821 12:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:21.821 12:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:29:21.821 12:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:21.821 12:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:22.078 12:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:22.078 12:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:22.078 12:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:22.078 12:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:22.078 12:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:22.078 12:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:22.335 nvme0n1
00:29:22.335 12:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:29:22.335 12:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:22.335 12:28:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:22.335 12:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:22.335 12:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:22.335 12:28:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:22.335 Running I/O for 2 seconds...
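The trace above is the entire setup for the write-path digest-error case; a condensed sketch of the same RPCs, with the caveat that rpc_cmd in the trace presumably targets the nvmf target's default RPC socket while bperf_rpc targets the bdevperf socket:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # bdevperf side: keep per-status NVMe error counters and retry failed I/O forever,
    # then attach with TCP data digest enabled (--ddgst)
    $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # target side: make the accel framework corrupt crc32c results (flags taken verbatim
    # from the trace; -i 256 is whatever digest.sh passes, not explained here)
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 256

With crc32c corrupted, the target's data digest verification fails on incoming WRITE data for as long as the injection is active, so each command in the run below completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) and is retried rather than failed, which appears to be what --bdev-retry-count -1 is set up to exercise.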
00:29:22.335 [2024-11-25 12:28:18.105768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f3a28 00:29:22.335 [2024-11-25 12:28:18.106644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:22.335 [2024-11-25 12:28:18.107513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:22.335 [2024-11-25 12:28:18.117074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e6b70 00:29:22.335 [2024-11-25 12:28:18.118397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:23376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:22.335 [2024-11-25 12:28:18.118503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:22.335 [2024-11-25 12:28:18.124904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166de470 00:29:22.335 [2024-11-25 12:28:18.126243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:22.335 [2024-11-25 12:28:18.126267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:22.335 [2024-11-25 12:28:18.134087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166ff3c8 00:29:22.335 [2024-11-25 12:28:18.134863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:14559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:22.336 [2024-11-25 12:28:18.134887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:22.336 [2024-11-25 12:28:18.142224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166efae0 00:29:22.336 [2024-11-25 12:28:18.142837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:14215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:22.336 [2024-11-25 12:28:18.142861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:22.336 [2024-11-25 12:28:18.150363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e6fa8 00:29:22.336 [2024-11-25 12:28:18.150897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:2475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:22.336 [2024-11-25 12:28:18.150922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:22.336 [2024-11-25 12:28:18.160119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e5658 00:29:22.336 [2024-11-25 12:28:18.161294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:22597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:22.336 [2024-11-25 12:28:18.161392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 
sqhd:0077 p:0 m:0 dnr:0 00:29:22.336 [2024-11-25 12:28:18.168320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166eb760 00:29:22.336 [2024-11-25 12:28:18.169292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:22.336 [2024-11-25 12:28:18.169314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:22.336 [2024-11-25 12:28:18.176477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e3060 00:29:22.336 [2024-11-25 12:28:18.177348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:22.336 [2024-11-25 12:28:18.177371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:22.336 [2024-11-25 12:28:18.184589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166fd208 00:29:22.336 [2024-11-25 12:28:18.185336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:23701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:22.336 [2024-11-25 12:28:18.185357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:22.336 [2024-11-25 12:28:18.192708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f3a28 00:29:22.336 [2024-11-25 12:28:18.193354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:22.336 [2024-11-25 12:28:18.193376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:22.336 [2024-11-25 12:28:18.200807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166ea680 00:29:22.336 [2024-11-25 12:28:18.201318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:8219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:22.336 [2024-11-25 12:28:18.201340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:22.594 [2024-11-25 12:28:18.211527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e3d08 00:29:22.594 [2024-11-25 12:28:18.212838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:3288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:22.594 [2024-11-25 12:28:18.212859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:22.594 [2024-11-25 12:28:18.218456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f1868 00:29:22.594 [2024-11-25 12:28:18.219285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:2571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:22.594 [2024-11-25 12:28:18.219304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:77 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:29:22.594 [2024-11-25 12:28:18.226627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f5378 00:29:22.594 [2024-11-25 12:28:18.227289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:22.594 [2024-11-25 12:28:18.227312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:22.594 [2024-11-25 12:28:18.236626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166dfdc0 00:29:22.594 [2024-11-25 12:28:18.237629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:9676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:22.594 [2024-11-25 12:28:18.237653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:22.594 [2024-11-25 12:28:18.244775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166eee38 00:29:22.594 [2024-11-25 12:28:18.245631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:22.594 [2024-11-25 12:28:18.245653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:22.594 [2024-11-25 12:28:18.252872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166fac10 00:29:22.594 [2024-11-25 12:28:18.253626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:3450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:22.594 [2024-11-25 12:28:18.253648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:22.594 [2024-11-25 12:28:18.260948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166fe2e8 00:29:22.594 [2024-11-25 12:28:18.261579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:22.594 [2024-11-25 12:28:18.261601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:22.594 [2024-11-25 12:28:18.269063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f7970 00:29:22.594 [2024-11-25 12:28:18.269594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:22.594 [2024-11-25 12:28:18.269611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:22.594 [2024-11-25 12:28:18.279797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e6fa8 00:29:22.594 [2024-11-25 12:28:18.281047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:19158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:22.594 [2024-11-25 12:28:18.281068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:22.594 [2024-11-25 12:28:18.287874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166ebb98 00:29:22.594 [2024-11-25 12:28:18.288990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:22.594 [2024-11-25 12:28:18.289011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:22.594 [2024-11-25 12:28:18.295997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166ee190 00:29:22.594 [2024-11-25 12:28:18.297012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:11435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:22.594 [2024-11-25 12:28:18.297033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:22.594 [2024-11-25 12:28:18.304115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e84c0 00:29:22.594 [2024-11-25 12:28:18.305035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:22.594 [2024-11-25 12:28:18.305056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:22.594 [2024-11-25 12:28:18.312466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166de470 00:29:22.594 [2024-11-25 12:28:18.313281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:1383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:22.594 [2024-11-25 12:28:18.313303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:22.594 [2024-11-25 12:28:18.320789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e99d8 00:29:22.594 [2024-11-25 12:28:18.321472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:7978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:22.594 [2024-11-25 12:28:18.321494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:22.594 [2024-11-25 12:28:18.329779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166fd640 00:29:22.594 [2024-11-25 12:28:18.330247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:7954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:22.594 [2024-11-25 12:28:18.330265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:22.594 [2024-11-25 12:28:18.339790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166fdeb0 00:29:22.594 [2024-11-25 12:28:18.340846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:6467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:22.595 [2024-11-25 12:28:18.340867] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:29:22.595 [2024-11-25 12:28:18.347791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f4b08
00:29:22.595 [2024-11-25 12:28:18.348643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:9421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:22.595 [2024-11-25 12:28:18.348747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:29:22.595 [2024-11-25 12:28:18.356352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e1f80
00:29:22.595 [2024-11-25 12:28:18.357326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:4470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:22.595 [2024-11-25 12:28:18.357348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:29:22.595 [2024-11-25 12:28:18.367032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e6b70
00:29:22.595 [2024-11-25 12:28:18.368538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:21519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:22.595 [2024-11-25 12:28:18.368626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:29:22.595 [2024-11-25 12:28:18.373258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166fc998
00:29:22.595 [2024-11-25 12:28:18.373964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:12318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:22.595 [2024-11-25 12:28:18.373985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:29:22.595 [2024-11-25 12:28:18.383479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f8618
00:29:22.595 [2024-11-25 12:28:18.384663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:22.595 [2024-11-25 12:28:18.384684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:29:22.595 [2024-11-25 12:28:18.391408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f2948
00:29:22.595 [2024-11-25 12:28:18.392401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:22.595 [2024-11-25 12:28:18.392426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:29:22.595 [2024-11-25 12:28:18.399743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f1868
00:29:22.595 [2024-11-25 12:28:18.400718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:22.595 [2024-11-25 12:28:18.400741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:29:22.595 [2024-11-25 12:28:18.407675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e6b70
00:29:22.595 [2024-11-25 12:28:18.408401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:22.595 [2024-11-25 12:28:18.408425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:29:22.595 [2024-11-25 12:28:18.416009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166eaef0
00:29:22.595 [2024-11-25 12:28:18.416775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:20746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:22.595 [2024-11-25 12:28:18.416796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:29:22.595 [2024-11-25 12:28:18.426226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166de038
00:29:22.595 [2024-11-25 12:28:18.427540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:13528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:22.595 [2024-11-25 12:28:18.427633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:29:22.595 [2024-11-25 12:28:18.432429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f4298
00:29:22.595 [2024-11-25 12:28:18.432986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:22.595 [2024-11-25 12:28:18.433007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:29:22.595 [2024-11-25 12:28:18.442637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166ef6a8
00:29:22.595 [2024-11-25 12:28:18.443674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:8601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:22.595 [2024-11-25 12:28:18.443696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:22.595 [2024-11-25 12:28:18.450573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e3060
00:29:22.595 [2024-11-25 12:28:18.451369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:22.595 [2024-11-25 12:28:18.451393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:22.595 [2024-11-25 12:28:18.458909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f6458
00:29:22.595 [2024-11-25 12:28:18.459724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:10394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:22.595 [2024-11-25 12:28:18.459745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:29:22.853 [2024-11-25 12:28:18.469372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166ed4e8
00:29:22.853 [2024-11-25 12:28:18.471715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:10493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:22.853 [2024-11-25 12:28:18.471807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:29:22.853 [2024-11-25 12:28:18.481522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166fb8b8
00:29:22.853 [2024-11-25 12:28:18.482042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:15828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:22.853 [2024-11-25 12:28:18.482146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:29:22.853 [2024-11-25 12:28:18.491837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f46d0
00:29:22.853 [2024-11-25 12:28:18.492903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:6360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:22.853 [2024-11-25 12:28:18.492922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:29:22.853 [2024-11-25 12:28:18.499848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f7da8
00:29:22.853 [2024-11-25 12:28:18.500587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:9876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:22.853 [2024-11-25 12:28:18.500611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:29:22.853 [2024-11-25 12:28:18.508177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f8618
00:29:22.853 [2024-11-25 12:28:18.508964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:22.853 [2024-11-25 12:28:18.509051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:29:22.853 [2024-11-25 12:28:18.518462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e3060
00:29:22.853 [2024-11-25 12:28:18.519731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:17456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:22.853 [2024-11-25 12:28:18.519761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:22.853 [2024-11-25 12:28:18.524583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166dece0
00:29:22.853 [2024-11-25 12:28:18.525176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:18808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
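The triplets above repeat for every failed I/O: tcp.c reports a CRC32C data-digest (DDGST) mismatch on a PDU, nvme_qpair.c prints the 4 KiB WRITE that PDU carried, and the command completes with a transient transport error. For illustration only (a generic bitwise sketch, not SPDK's accelerated implementation), a data-digest check of this kind boils down to:

/* Minimal sketch of the CRC32C data-digest check behind the
 * "data_crc32_calc_done: Data digest error" lines. NVMe/TCP protects
 * each DATA PDU payload with a CRC32C (Castagnoli) digest; a mismatch
 * between the computed and received digest fails the transfer. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

static uint32_t crc32c(const void *buf, size_t len)
{
    const uint8_t *p = buf;
    uint32_t crc = 0xFFFFFFFFu;              /* initial value */

    while (len--) {
        crc ^= *p++;
        for (int i = 0; i < 8; i++)          /* reflected poly 0x82F63B78 */
            crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1u));
    }
    return crc ^ 0xFFFFFFFFu;                /* final XOR */
}

/* True when the payload matches the digest carried in the PDU. */
static bool ddgst_ok(const void *payload, size_t len, uint32_t recv_ddgst)
{
    return crc32c(payload, len) == recv_ddgst;
}

int main(void)
{
    uint8_t pdu_data[4096] = { 0 };          /* one 4 KiB WRITE, as in the log */
    uint32_t good = crc32c(pdu_data, sizeof(pdu_data));

    printf("digest ok:  %d\n", ddgst_ok(pdu_data, sizeof(pdu_data), good));
    printf("digest bad: %d\n", ddgst_ok(pdu_data, sizeof(pdu_data), good ^ 1));
    return 0;
}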
00:29:22.853 [2024-11-25 12:28:18.525197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:22.853 [2024-11-25 12:28:18.535134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f2510
00:29:22.853 [2024-11-25 12:28:18.536289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:22.853 [2024-11-25 12:28:18.536313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:29:22.853 [2024-11-25 12:28:18.543386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f0350
00:29:22.854 [2024-11-25 12:28:18.544195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:22.854 [2024-11-25 12:28:18.544219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:29:22.854 [2024-11-25 12:28:18.551814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166de038
00:29:22.854 [2024-11-25 12:28:18.552650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:22.854 [2024-11-25 12:28:18.552671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:29:22.854 [2024-11-25 12:28:18.562013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166dfdc0
00:29:22.854 [2024-11-25 12:28:18.563454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:15492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:22.854 [2024-11-25 12:28:18.563544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:29:22.854 [2024-11-25 12:28:18.568364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e7c50
00:29:22.854 [2024-11-25 12:28:18.569071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:12373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:22.854 [2024-11-25 12:28:18.569150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:29:22.854 [2024-11-25 12:28:18.578826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166fc128
00:29:22.854 [2024-11-25 12:28:18.580013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:17549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:22.854 [2024-11-25 12:28:18.580106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:29:22.854 [2024-11-25 12:28:18.587453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e38d0
00:29:22.854 [2024-11-25 12:28:18.588347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:9550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:22.854 [2024-11-25 12:28:18.588457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:29:22.854 [2024-11-25 12:28:18.596229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e7818
00:29:22.854 [2024-11-25 12:28:18.597099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:18271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:22.854 [2024-11-25 12:28:18.597195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:29:22.854 [2024-11-25 12:28:18.604933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f1ca0
00:29:22.854 [2024-11-25 12:28:18.605895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:22.854 [2024-11-25 12:28:18.605984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:29:22.854 [2024-11-25 12:28:18.615327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e5658
00:29:22.854 [2024-11-25 12:28:18.616773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:22.854 [2024-11-25 12:28:18.616863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:29:22.854 [2024-11-25 12:28:18.621550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166ed920
00:29:22.854 [2024-11-25 12:28:18.622309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:10541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:22.854 [2024-11-25 12:28:18.622395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:29:22.854 [2024-11-25 12:28:18.631957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e4140
00:29:22.854 [2024-11-25 12:28:18.633186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:10073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:22.854 [2024-11-25 12:28:18.633274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:29:22.854 [2024-11-25 12:28:18.640127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e88f8
00:29:22.854 [2024-11-25 12:28:18.641040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:22.854 [2024-11-25 12:28:18.641128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:29:22.854 [2024-11-25 12:28:18.648653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e4de8
00:29:22.854 [2024-11-25 12:28:18.649550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:12025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:22.854 [2024-11-25 12:28:18.649641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:22.854 [2024-11-25 12:28:18.657050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166fc998
00:29:22.854 [2024-11-25 12:28:18.658062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:21857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:22.854 [2024-11-25 12:28:18.658164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:29:22.854 [2024-11-25 12:28:18.667475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f4f40
00:29:22.854 [2024-11-25 12:28:18.668976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:22.854 [2024-11-25 12:28:18.669062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:29:22.854 [2024-11-25 12:28:18.673729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166fe2e8
00:29:22.854 [2024-11-25 12:28:18.674555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:22.854 [2024-11-25 12:28:18.674644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:29:22.854 [2024-11-25 12:28:18.682458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f0788
00:29:22.854 [2024-11-25 12:28:18.683260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:22.854 [2024-11-25 12:28:18.683346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:29:22.854 [2024-11-25 12:28:18.692997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f20d8
00:29:22.854 [2024-11-25 12:28:18.694399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:3986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:22.854 [2024-11-25 12:28:18.694487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:29:22.854 [2024-11-25 12:28:18.699235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166eee38
00:29:22.854 [2024-11-25 12:28:18.699944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:22.854 [2024-11-25 12:28:18.700032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:29:22.854 [2024-11-25 12:28:18.709621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e9168
00:29:22.854 [2024-11-25 12:28:18.710827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:22.854 [2024-11-25 12:28:18.710905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:29:22.854 [2024-11-25 12:28:18.717693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166fa7d8
00:29:22.854 [2024-11-25 12:28:18.718584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:22.854 [2024-11-25 12:28:18.718669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:29:22.854 [2024-11-25 12:28:18.726054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f81e0
00:29:22.854 [2024-11-25 12:28:18.727035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:11219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:22.854 [2024-11-25 12:28:18.727120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:29:23.112 [2024-11-25 12:28:18.736583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166fd208
00:29:23.112 [2024-11-25 12:28:18.738088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.112 [2024-11-25 12:28:18.738182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:29:23.112 [2024-11-25 12:28:18.742987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166ed0b0
00:29:23.112 [2024-11-25 12:28:18.743743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:23950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.112 [2024-11-25 12:28:18.743836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:29:23.112 [2024-11-25 12:28:18.753379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166eb328
00:29:23.112 [2024-11-25 12:28:18.754630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.112 [2024-11-25 12:28:18.754717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:23.112 [2024-11-25 12:28:18.761494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166fd640
00:29:23.112 [2024-11-25 12:28:18.762439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.112 [2024-11-25 12:28:18.762528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:29:23.112 [2024-11-25 12:28:18.770023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166fc998
00:29:23.112 [2024-11-25 12:28:18.770945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:25264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.112 [2024-11-25 12:28:18.771031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:29:23.112 [2024-11-25 12:28:18.778415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166fdeb0
00:29:23.113 [2024-11-25 12:28:18.779444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.113 [2024-11-25 12:28:18.779530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:29:23.113 [2024-11-25 12:28:18.786831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166fb8b8
00:29:23.113 [2024-11-25 12:28:18.787528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:8450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.113 [2024-11-25 12:28:18.787636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:29:23.113 [2024-11-25 12:28:18.795240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f4f40
00:29:23.113 [2024-11-25 12:28:18.796066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:23314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.113 [2024-11-25 12:28:18.796155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:29:23.113 [2024-11-25 12:28:18.803961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f96f8
00:29:23.113 [2024-11-25 12:28:18.804781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:10720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.113 [2024-11-25 12:28:18.804871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:29:23.113 [2024-11-25 12:28:18.814071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e1b48
00:29:23.113 [2024-11-25 12:28:18.815366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:18842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.113 [2024-11-25 12:28:18.815457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:29:23.113 [2024-11-25 12:28:18.820610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166feb58
00:29:23.113 [2024-11-25 12:28:18.821207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.113 [2024-11-25 12:28:18.821311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:29:23.113 [2024-11-25 12:28:18.831024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166de038
00:29:23.113 [2024-11-25 12:28:18.832100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:6250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.113 [2024-11-25 12:28:18.832195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:29:23.113 [2024-11-25 12:28:18.839722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e73e0
00:29:23.113 [2024-11-25 12:28:18.840464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:15562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.113 [2024-11-25 12:28:18.840556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:29:23.113 [2024-11-25 12:28:18.847956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e5220
00:29:23.113 [2024-11-25 12:28:18.848601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:18937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.113 [2024-11-25 12:28:18.848691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:29:23.113 [2024-11-25 12:28:18.856124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e7c50
00:29:23.113 [2024-11-25 12:28:18.856559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.113 [2024-11-25 12:28:18.856578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:29:23.113 [2024-11-25 12:28:18.865842] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f2d80
00:29:23.113 [2024-11-25 12:28:18.866879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:24712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.113 [2024-11-25 12:28:18.866963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:29:23.113 [2024-11-25 12:28:18.873956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e5658
00:29:23.113 [2024-11-25 12:28:18.874889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:21581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.113 [2024-11-25 12:28:18.874911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:29:23.113 [2024-11-25 12:28:18.882040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166de8a8
00:29:23.113 [2024-11-25 12:28:18.882849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.113 [2024-11-25 12:28:18.882871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:29:23.113 [2024-11-25 12:28:18.890172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e8d30
00:29:23.113 [2024-11-25 12:28:18.890873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:20345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.113 [2024-11-25 12:28:18.890896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:23.113 [2024-11-25 12:28:18.898243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e1710
00:29:23.113 [2024-11-25 12:28:18.898820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:9653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.113 [2024-11-25 12:28:18.898843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:29:23.113 [2024-11-25 12:28:18.908962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e3d08
00:29:23.113 [2024-11-25 12:28:18.910325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:12605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.113 [2024-11-25 12:28:18.910413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:29:23.113 [2024-11-25 12:28:18.914929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f1868
00:29:23.113 [2024-11-25 12:28:18.915508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.113 [2024-11-25 12:28:18.915531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:29:23.113 [2024-11-25 12:28:18.925138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e6b70
00:29:23.113 [2024-11-25 12:28:18.926214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.113 [2024-11-25 12:28:18.926236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:29:23.113 [2024-11-25 12:28:18.933063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f0bc0
00:29:23.113 [2024-11-25 12:28:18.933885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:23715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.113 [2024-11-25 12:28:18.933907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:29:23.113 [2024-11-25 12:28:18.941384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f20d8
00:29:23.113 [2024-11-25 12:28:18.942244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:14937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.113 [2024-11-25 12:28:18.942266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:29:23.113 [2024-11-25 12:28:18.951600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166ebb98
00:29:23.113 [2024-11-25 12:28:18.952942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.113 [2024-11-25 12:28:18.953026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:29:23.113 [2024-11-25 12:28:18.957729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e1b48
00:29:23.113 [2024-11-25 12:28:18.958380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.113 [2024-11-25 12:28:18.958403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:29:23.113 [2024-11-25 12:28:18.967948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f2510
00:29:23.113 [2024-11-25 12:28:18.969139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:21114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.113 [2024-11-25 12:28:18.969161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:29:23.113 [2024-11-25 12:28:18.975961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166ee190
00:29:23.113 [2024-11-25 12:28:18.976818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:19029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.113 [2024-11-25 12:28:18.976839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:29:23.113 [2024-11-25 12:28:18.984341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e27f0
00:29:23.113 [2024-11-25 12:28:18.985284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.113 [2024-11-25 12:28:18.985306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:29:23.373 [2024-11-25 12:28:18.993256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166fc998
00:29:23.373 [2024-11-25 12:28:18.994203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.373 [2024-11-25 12:28:18.994225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:29:23.373 [2024-11-25 12:28:19.002123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e2c28
00:29:23.373 [2024-11-25 12:28:19.002827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.373 [2024-11-25 12:28:19.002849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:29:23.373 [2024-11-25 12:28:19.010387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166edd58
00:29:23.373 [2024-11-25 12:28:19.011266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:19921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.373 [2024-11-25 12:28:19.011293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:29:23.373 [2024-11-25 12:28:19.018718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f1ca0
00:29:23.373 [2024-11-25 12:28:19.019638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.373 [2024-11-25 12:28:19.019659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:29:23.373 [2024-11-25 12:28:19.028945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e0630
00:29:23.373 [2024-11-25 12:28:19.030351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.373 [2024-11-25 12:28:19.030372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:29:23.373 [2024-11-25 12:28:19.035003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e95a0
00:29:23.373 [2024-11-25 12:28:19.035785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.373 [2024-11-25 12:28:19.035803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:29:23.373 [2024-11-25 12:28:19.043720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e1710
00:29:23.373 [2024-11-25 12:28:19.044434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.373 [2024-11-25 12:28:19.044457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:29:23.373 [2024-11-25 12:28:19.052875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e3d08
00:29:23.373 [2024-11-25 12:28:19.053681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:12179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.373 [2024-11-25 12:28:19.053703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:29:23.373 [2024-11-25 12:28:19.061049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166ed920
00:29:23.373 [2024-11-25 12:28:19.061747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.373 [2024-11-25 12:28:19.061803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:29:23.373 [2024-11-25 12:28:19.069137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f8e88
00:29:23.373 [2024-11-25 12:28:19.069719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:10611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.373 [2024-11-25 12:28:19.069736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:29:23.373 [2024-11-25 12:28:19.079861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166fc560
00:29:23.373 [2024-11-25 12:28:19.081154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.373 [2024-11-25 12:28:19.081175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:29:23.373 [2024-11-25 12:28:19.087965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166ecc78
00:29:23.373 [2024-11-25 12:28:19.089150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:18481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.373 [2024-11-25 12:28:19.089170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:29:23.373 [2024-11-25 12:28:19.096063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e7c50
00:29:23.373 29281.00 IOPS, 114.38 MiB/s [2024-11-25T12:28:19.097Z] [2024-11-25 12:28:19.097134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.373 [2024-11-25 12:28:19.097156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:29:23.373 [2024-11-25 12:28:19.104159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166ddc00
00:29:23.373 [2024-11-25 12:28:19.105119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.373 [2024-11-25 12:28:19.105141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:29:23.373 [2024-11-25 12:28:19.112246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e84c0
00:29:23.373 [2024-11-25 12:28:19.113092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:15014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.373 [2024-11-25 12:28:19.113114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:29:23.373 [2024-11-25 12:28:19.120358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166feb58
00:29:23.373 [2024-11-25 12:28:19.121094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.373 [2024-11-25 12:28:19.121115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
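Every completion above carries the same status, sct 0x0 (generic) / sc 0x22 (Command Transient Transport Error) with dnr:0, i.e. the host may retry; the interleaved throughput line is consistent with 4 KiB writes (29281 IOPS x 4096 B = 119,934,976 B/s = 114.38 MiB/s). A minimal decoder for the 16-bit completion status word that spdk_nvme_print_completion renders as "(00/22) ... p:0 m:0 dnr:0" (a sketch following the NVMe spec layout of CQE dword 3 bits 31:16, not SPDK code):

#include <stdint.h>
#include <stdio.h>

/* Decode the completion status halfword: bit 0 is the phase tag,
 * SC is bits 8:1, SCT bits 11:9, CRD bits 13:12, M bit 14, DNR bit 15. */
static void print_status(uint16_t st)
{
    unsigned p   = st & 1u;
    unsigned sc  = (st >> 1) & 0xffu;
    unsigned sct = (st >> 9) & 0x7u;
    unsigned m   = (st >> 14) & 1u;
    unsigned dnr = (st >> 15) & 1u;

    /* (00/22) with dnr:0 is a generic transient transport error,
     * which the initiator is allowed to retry. */
    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
}

int main(void)
{
    print_status((uint16_t)(0x22u << 1));  /* prints: (00/22) p:0 m:0 dnr:0 */
    return 0;
}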
00:29:23.373 [2024-11-25 12:28:19.128442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e12d8
00:29:23.373 [2024-11-25 12:28:19.129060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:21285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.373 [2024-11-25 12:28:19.129089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:29:23.373 [2024-11-25 12:28:19.136553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e2c28
00:29:23.373 [2024-11-25 12:28:19.137065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:2562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.373 [2024-11-25 12:28:19.137087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:29:23.373 [2024-11-25 12:28:19.146625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e27f0
00:29:23.374 [2024-11-25 12:28:19.147343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:25395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.374 [2024-11-25 12:28:19.147362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:29:23.374 [2024-11-25 12:28:19.154809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e8088
00:29:23.374 [2024-11-25 12:28:19.155322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:16076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.374 [2024-11-25 12:28:19.155420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:29:23.374 [2024-11-25 12:28:19.162917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166fe2e8
00:29:23.374 [2024-11-25 12:28:19.163330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:12840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.374 [2024-11-25 12:28:19.163348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:29:23.374 [2024-11-25 12:28:19.172631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f0bc0
00:29:23.374 [2024-11-25 12:28:19.173624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.374 [2024-11-25 12:28:19.173646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:29:23.374 [2024-11-25 12:28:19.180727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166fc128
00:29:23.374 [2024-11-25 12:28:19.181606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:9592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.374 [2024-11-25 12:28:19.181693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:29:23.374 [2024-11-25 12:28:19.188625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166fd208
00:29:23.374 [2024-11-25 12:28:19.189359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:24021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.374 [2024-11-25 12:28:19.189382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:29:23.374 [2024-11-25 12:28:19.196952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166eaef0
00:29:23.374 [2024-11-25 12:28:19.197722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.374 [2024-11-25 12:28:19.197744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:29:23.374 [2024-11-25 12:28:19.205591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e1f80
00:29:23.374 [2024-11-25 12:28:19.206376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:6372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.374 [2024-11-25 12:28:19.206397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:29:23.374 [2024-11-25 12:28:19.214411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166fdeb0
00:29:23.374 [2024-11-25 12:28:19.215050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:21429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.374 [2024-11-25 12:28:19.215068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:29:23.374 [2024-11-25 12:28:19.222610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e73e0
00:29:23.374 [2024-11-25 12:28:19.223046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:20079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.374 [2024-11-25 12:28:19.223097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:29:23.374 [2024-11-25 12:28:19.232339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e12d8
00:29:23.374 [2024-11-25 12:28:19.233352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:12759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.374 [2024-11-25 12:28:19.233376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:29:23.374 [2024-11-25 12:28:19.240452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166ff3c8
00:29:23.374 [2024-11-25 12:28:19.241355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:21440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.374 [2024-11-25 12:28:19.241377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:29:23.633 [2024-11-25 12:28:19.248547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166ee190
00:29:23.633 [2024-11-25 12:28:19.249335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:7197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.633 [2024-11-25 12:28:19.249357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:29:23.633 [2024-11-25 12:28:19.256662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e3060
00:29:23.633 [2024-11-25 12:28:19.257345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.633 [2024-11-25 12:28:19.257367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:29:23.633 [2024-11-25 12:28:19.264776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166eb328
00:29:23.633 [2024-11-25 12:28:19.265338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:10343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.633 [2024-11-25 12:28:19.265361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:29:23.633 [2024-11-25 12:28:19.275507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166efae0
00:29:23.633 [2024-11-25 12:28:19.276787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:3461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.633 [2024-11-25 12:28:19.276809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:29:23.633 [2024-11-25 12:28:19.282337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166eb328
00:29:23.633 [2024-11-25 12:28:19.283141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:7890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.633 [2024-11-25 12:28:19.283163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:29:23.633 [2024-11-25 12:28:19.290433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166ea680
00:29:23.633 [2024-11-25 12:28:19.291119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:16964 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.633 [2024-11-25 12:28:19.291142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:29:23.633 [2024-11-25 12:28:19.298546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f6cc8
00:29:23.633 [2024-11-25 12:28:19.299126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.633 [2024-11-25 12:28:19.299148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:29:23.633 [2024-11-25 12:28:19.308601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166efae0
00:29:23.633 [2024-11-25 12:28:19.309387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.633 [2024-11-25 12:28:19.309405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:29:23.633 [2024-11-25 12:28:19.316791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e3060
00:29:23.633 [2024-11-25 12:28:19.317370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:18405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.633 [2024-11-25 12:28:19.317394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:23.633 [2024-11-25 12:28:19.324902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166ea248
00:29:23.633 [2024-11-25 12:28:19.325380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:23049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.633 [2024-11-25 12:28:19.325403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:29:23.633 [2024-11-25 12:28:19.334615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f1430
00:29:23.633 [2024-11-25 12:28:19.335669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:20917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.633 [2024-11-25 12:28:19.335692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:29:23.633 [2024-11-25 12:28:19.342697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f92c0
00:29:23.633 [2024-11-25 12:28:19.343648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.633 [2024-11-25 12:28:19.343671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:29:23.633 [2024-11-25 12:28:19.350820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f6458
00:29:23.633 [2024-11-25 12:28:19.351640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:3416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.633 [2024-11-25 12:28:19.351661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:29:23.633 [2024-11-25 12:28:19.358908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f5378
00:29:23.633 [2024-11-25 12:28:19.359613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.633 [2024-11-25 12:28:19.359635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:29:23.633 [2024-11-25 12:28:19.367053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166fa3a0
00:29:23.633 [2024-11-25 12:28:19.367650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.633 [2024-11-25 12:28:19.367663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:29:23.634 [2024-11-25 12:28:19.377856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166fcdd0
00:29:23.634 [2024-11-25 12:28:19.379291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:25015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.634 [2024-11-25 12:28:19.379314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:29:23.634 [2024-11-25 12:28:19.386544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e27f0
00:29:23.634 [2024-11-25 12:28:19.387893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.634 [2024-11-25 12:28:19.387982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:29:23.634 [2024-11-25 12:28:19.392738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166fbcf0
00:29:23.634 [2024-11-25 12:28:19.393354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:18029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.634 [2024-11-25 12:28:19.393376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:29:23.634 [2024-11-25 12:28:19.403160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166ef6a8
00:29:23.634 [2024-11-25 12:28:19.404357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.634 [2024-11-25 12:28:19.404447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:29:23.634 [2024-11-25 12:28:19.411367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166eea00
00:29:23.634 [2024-11-25 12:28:19.412190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.634 [2024-11-25 12:28:19.412282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:29:23.634 [2024-11-25 12:28:19.419775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e23b8
00:29:23.634 [2024-11-25 12:28:19.420721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:13071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.634 [2024-11-25 12:28:19.420823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:29:23.634 [2024-11-25 12:28:19.428488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e8d30
00:29:23.634 [2024-11-25 12:28:19.429435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.634 [2024-11-25 12:28:19.429526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:29:23.634 [2024-11-25 12:28:19.436744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f5be8
00:29:23.634 [2024-11-25 12:28:19.437492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.634 [2024-11-25 12:28:19.437584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:29:23.634 [2024-11-25 12:28:19.445292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e84c0
00:29:23.634 [2024-11-25 12:28:19.446013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:12449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.634 [2024-11-25 12:28:19.446102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:29:23.634 [2024-11-25 12:28:19.453672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f7538
00:29:23.634 [2024-11-25 12:28:19.454528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.634 [2024-11-25 12:28:19.454615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:29:23.634 [2024-11-25 12:28:19.464070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f2948
00:29:23.634 [2024-11-25 12:28:19.465393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:3474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.634 [2024-11-25 12:28:19.465479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:29:23.634 [2024-11-25 12:28:19.470283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f6020
00:29:23.634 [2024-11-25 12:28:19.470928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.634 [2024-11-25 12:28:19.471016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:23.634 [2024-11-25 12:28:19.480687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e99d8
00:29:23.634 [2024-11-25 12:28:19.481812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.634 [2024-11-25 12:28:19.481898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:29:23.634 [2024-11-25 12:28:19.488796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e9168
00:29:23.634 [2024-11-25 12:28:19.489582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.634 [2024-11-25 12:28:19.489670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:29:23.634 [2024-11-25 12:28:19.497315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f31b8
00:29:23.634 [2024-11-25 12:28:19.498120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:18348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.634 [2024-11-25 12:28:19.498223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:23.634 [2024-11-25 12:28:19.505714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f46d0
00:29:23.634 [2024-11-25 12:28:19.506624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:21263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.634 [2024-11-25 12:28:19.506712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:29:23.892 [2024-11-25 12:28:19.516233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e23b8
00:29:23.892 [2024-11-25 12:28:19.517607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.892 [2024-11-25 12:28:19.517684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:29:23.892 [2024-11-25 12:28:19.522417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e23b8
00:29:23.892 [2024-11-25 12:28:19.523103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:15357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.892 [2024-11-25 12:28:19.523175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:29:23.892 [2024-11-25 12:28:19.532791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f46d0
00:29:23.892 [2024-11-25 12:28:19.533954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:8996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.892 [2024-11-25 12:28:19.534038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:29:23.892 [2024-11-25 12:28:19.540923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f6890
00:29:23.892 [2024-11-25 12:28:19.541795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:8673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:23.893 [2024-11-25 12:28:19.541881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:23.893 [2024-11-25 12:28:19.549309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f92c0 00:29:23.893 [2024-11-25 12:28:19.550276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:18174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.893 [2024-11-25 12:28:19.550368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:23.893 [2024-11-25 12:28:19.559955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e0a68 00:29:23.893 [2024-11-25 12:28:19.561430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:14923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.893 [2024-11-25 12:28:19.561514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:23.893 [2024-11-25 12:28:19.566268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166ea248 00:29:23.893 [2024-11-25 12:28:19.567013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:13317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.893 [2024-11-25 12:28:19.567099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:23.893 [2024-11-25 12:28:19.576666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f1ca0 00:29:23.893 [2024-11-25 12:28:19.577905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.893 [2024-11-25 12:28:19.577990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:23.893 [2024-11-25 12:28:19.585570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e4140 00:29:23.893 [2024-11-25 12:28:19.586847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:6641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.893 [2024-11-25 12:28:19.586934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:23.893 [2024-11-25 12:28:19.592149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e6b70 00:29:23.893 [2024-11-25 12:28:19.592818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.893 [2024-11-25 12:28:19.592904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:23.893 [2024-11-25 12:28:19.602626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e27f0 00:29:23.893 [2024-11-25 12:28:19.603765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:11726 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:29:23.893 [2024-11-25 12:28:19.603850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:23.893 [2024-11-25 12:28:19.611503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e9168 00:29:23.893 [2024-11-25 12:28:19.612661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:9433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.893 [2024-11-25 12:28:19.612756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:23.893 [2024-11-25 12:28:19.620003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e4de8 00:29:23.893 [2024-11-25 12:28:19.620971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:8371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.893 [2024-11-25 12:28:19.621060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:23.893 [2024-11-25 12:28:19.628642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166ec408 00:29:23.893 [2024-11-25 12:28:19.629707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.893 [2024-11-25 12:28:19.629810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:23.893 [2024-11-25 12:28:19.637031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166de038 00:29:23.893 [2024-11-25 12:28:19.637788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:14701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.893 [2024-11-25 12:28:19.637877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:23.893 [2024-11-25 12:28:19.645811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166df550 00:29:23.893 [2024-11-25 12:28:19.646542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.893 [2024-11-25 12:28:19.646631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:23.893 [2024-11-25 12:28:19.654569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166fc128 00:29:23.893 [2024-11-25 12:28:19.655309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:20780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.893 [2024-11-25 12:28:19.655398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:23.893 [2024-11-25 12:28:19.663192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166ebfd0 00:29:23.893 [2024-11-25 12:28:19.664000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 
lba:23333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.893 [2024-11-25 12:28:19.664085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:23.893 [2024-11-25 12:28:19.673570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e5220 00:29:23.893 [2024-11-25 12:28:19.674869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:4926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.893 [2024-11-25 12:28:19.674953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:23.893 [2024-11-25 12:28:19.679806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166fbcf0 00:29:23.893 [2024-11-25 12:28:19.680388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:13017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.893 [2024-11-25 12:28:19.680406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:23.893 [2024-11-25 12:28:19.690076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166eaab8 00:29:23.893 [2024-11-25 12:28:19.691097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.893 [2024-11-25 12:28:19.691118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:23.893 [2024-11-25 12:28:19.698012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f20d8 00:29:23.893 [2024-11-25 12:28:19.698785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:23267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.893 [2024-11-25 12:28:19.698807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:23.893 [2024-11-25 12:28:19.706345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166fb048 00:29:23.893 [2024-11-25 12:28:19.707148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:22800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.893 [2024-11-25 12:28:19.707169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:23.893 [2024-11-25 12:28:19.716539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f5378 00:29:23.893 [2024-11-25 12:28:19.717830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.893 [2024-11-25 12:28:19.717909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:23.893 [2024-11-25 12:28:19.722658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166fa3a0 00:29:23.893 [2024-11-25 12:28:19.723257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:25 nsid:1 lba:14599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.893 [2024-11-25 12:28:19.723279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:23.893 [2024-11-25 12:28:19.731309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e9e10 00:29:23.893 [2024-11-25 12:28:19.731971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.893 [2024-11-25 12:28:19.731989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:23.893 [2024-11-25 12:28:19.741444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e88f8 00:29:23.893 [2024-11-25 12:28:19.742165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:1054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.894 [2024-11-25 12:28:19.742188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:23.894 [2024-11-25 12:28:19.749542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166fc560 00:29:23.894 [2024-11-25 12:28:19.750180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:6381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.894 [2024-11-25 12:28:19.750203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:23.894 [2024-11-25 12:28:19.759616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e01f8 00:29:23.894 [2024-11-25 12:28:19.761031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:9315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.894 [2024-11-25 12:28:19.761115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:23.894 [2024-11-25 12:28:19.765762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f4b08 00:29:23.894 [2024-11-25 12:28:19.766482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:8848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:23.894 [2024-11-25 12:28:19.766504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:24.152 [2024-11-25 12:28:19.775973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f1868 00:29:24.152 [2024-11-25 12:28:19.777162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:4889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.152 [2024-11-25 12:28:19.777182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:24.152 [2024-11-25 12:28:19.783918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f6458 00:29:24.152 [2024-11-25 12:28:19.784869] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:17837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.152 [2024-11-25 12:28:19.784891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:24.152 [2024-11-25 12:28:19.792243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166ef6a8 00:29:24.152 [2024-11-25 12:28:19.793228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.152 [2024-11-25 12:28:19.793248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:24.152 [2024-11-25 12:28:19.800168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166fd208 00:29:24.152 [2024-11-25 12:28:19.800909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.152 [2024-11-25 12:28:19.800931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:24.152 [2024-11-25 12:28:19.808491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166eb760 00:29:24.152 [2024-11-25 12:28:19.809264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.152 [2024-11-25 12:28:19.809286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:24.152 [2024-11-25 12:28:19.819052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f8618 00:29:24.152 [2024-11-25 12:28:19.820303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:1826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.152 [2024-11-25 12:28:19.820388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:24.152 [2024-11-25 12:28:19.825177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166fd640 00:29:24.152 [2024-11-25 12:28:19.825733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:25141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.152 [2024-11-25 12:28:19.825761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:24.152 [2024-11-25 12:28:19.835387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166fcdd0 00:29:24.152 [2024-11-25 12:28:19.836429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:14137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.152 [2024-11-25 12:28:19.836451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:24.152 [2024-11-25 12:28:19.843325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166df550 00:29:24.152 [2024-11-25 
12:28:19.844123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.152 [2024-11-25 12:28:19.844146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:24.152 [2024-11-25 12:28:19.851658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e6738 00:29:24.152 [2024-11-25 12:28:19.852493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:25510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.152 [2024-11-25 12:28:19.852515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:29:24.152 [2024-11-25 12:28:19.861877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e88f8 00:29:24.152 [2024-11-25 12:28:19.863267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:10565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.152 [2024-11-25 12:28:19.863352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:24.152 [2024-11-25 12:28:19.868073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e88f8 00:29:24.152 [2024-11-25 12:28:19.868693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:10358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.152 [2024-11-25 12:28:19.868714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:24.152 [2024-11-25 12:28:19.876712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166de8a8 00:29:24.152 [2024-11-25 12:28:19.877333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.152 [2024-11-25 12:28:19.877354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:24.152 [2024-11-25 12:28:19.884838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166ebfd0 00:29:24.152 [2024-11-25 12:28:19.885412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:23683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.152 [2024-11-25 12:28:19.885429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:24.153 [2024-11-25 12:28:19.894979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f6458 00:29:24.153 [2024-11-25 12:28:19.895632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:13201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.153 [2024-11-25 12:28:19.895655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:24.153 [2024-11-25 12:28:19.903080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f57b0 
00:29:24.153 [2024-11-25 12:28:19.903586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:16810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.153 [2024-11-25 12:28:19.903609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:24.153 [2024-11-25 12:28:19.911198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166fa3a0 00:29:24.153 [2024-11-25 12:28:19.911626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:25105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.153 [2024-11-25 12:28:19.911644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:24.153 [2024-11-25 12:28:19.920945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f1868 00:29:24.153 [2024-11-25 12:28:19.921943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:17394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.153 [2024-11-25 12:28:19.922030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:24.153 [2024-11-25 12:28:19.929089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166eaef0 00:29:24.153 [2024-11-25 12:28:19.929988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.153 [2024-11-25 12:28:19.930011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:24.153 [2024-11-25 12:28:19.937398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166ea680 00:29:24.153 [2024-11-25 12:28:19.938202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:8134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.153 [2024-11-25 12:28:19.938225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:24.153 [2024-11-25 12:28:19.945679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f0350 00:29:24.153 [2024-11-25 12:28:19.946346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.153 [2024-11-25 12:28:19.946370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:24.153 [2024-11-25 12:28:19.953802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f6458 00:29:24.153 [2024-11-25 12:28:19.954349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:3869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.153 [2024-11-25 12:28:19.954370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:24.153 [2024-11-25 12:28:19.964534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) 
with pdu=0x2000166de8a8 00:29:24.153 [2024-11-25 12:28:19.965868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:23016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.153 [2024-11-25 12:28:19.965884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:24.153 [2024-11-25 12:28:19.972695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f1430 00:29:24.153 [2024-11-25 12:28:19.973833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.153 [2024-11-25 12:28:19.973853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:24.153 [2024-11-25 12:28:19.980814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f7100 00:29:24.153 [2024-11-25 12:28:19.981845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.153 [2024-11-25 12:28:19.981866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:24.153 [2024-11-25 12:28:19.988508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e8d30 00:29:24.153 [2024-11-25 12:28:19.989254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:10269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.153 [2024-11-25 12:28:19.989343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:24.153 [2024-11-25 12:28:19.996846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e1710 00:29:24.153 [2024-11-25 12:28:19.997626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:15883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.153 [2024-11-25 12:28:19.997648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:24.153 [2024-11-25 12:28:20.007171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e95a0 00:29:24.153 [2024-11-25 12:28:20.008590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.153 [2024-11-25 12:28:20.008684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:24.153 [2024-11-25 12:28:20.013744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e1f80 00:29:24.153 [2024-11-25 12:28:20.014371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:2170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.153 [2024-11-25 12:28:20.014394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:24.153 [2024-11-25 12:28:20.024179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1b94570) with pdu=0x2000166f7970 00:29:24.153 [2024-11-25 12:28:20.025237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:2044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.153 [2024-11-25 12:28:20.025259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:24.411 [2024-11-25 12:28:20.032276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166ec840 00:29:24.411 [2024-11-25 12:28:20.033147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.411 [2024-11-25 12:28:20.033171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:24.411 [2024-11-25 12:28:20.041124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166ee190 00:29:24.411 [2024-11-25 12:28:20.042020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:11792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.411 [2024-11-25 12:28:20.042042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:24.411 [2024-11-25 12:28:20.051455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166fc560 00:29:24.411 [2024-11-25 12:28:20.052786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:17332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.411 [2024-11-25 12:28:20.052808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:24.411 [2024-11-25 12:28:20.057509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f2510 00:29:24.411 [2024-11-25 12:28:20.058155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:17586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.411 [2024-11-25 12:28:20.058178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:24.411 [2024-11-25 12:28:20.066162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166f5be8 00:29:24.411 [2024-11-25 12:28:20.066797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:24526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.411 [2024-11-25 12:28:20.066882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:24.411 [2024-11-25 12:28:20.076661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e8d30 00:29:24.411 [2024-11-25 12:28:20.077965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:16896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.411 [2024-11-25 12:28:20.077982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:24.411 [2024-11-25 12:28:20.082794] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e4140 00:29:24.411 [2024-11-25 12:28:20.083327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:14755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.411 [2024-11-25 12:28:20.083350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:24.411 [2024-11-25 12:28:20.093004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b94570) with pdu=0x2000166e9168 00:29:24.411 [2024-11-25 12:28:20.094027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:24.411 [2024-11-25 12:28:20.094106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:24.411 29482.50 IOPS, 115.17 MiB/s 00:29:24.411 Latency(us) 00:29:24.411 [2024-11-25T12:28:20.287Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:24.411 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:24.411 nvme0n1 : 2.01 29474.02 115.13 0.00 0.00 4338.86 1739.22 16333.59 00:29:24.411 [2024-11-25T12:28:20.287Z] =================================================================================================================== 00:29:24.411 [2024-11-25T12:28:20.287Z] Total : 29474.02 115.13 0.00 0.00 4338.86 1739.22 16333.59 00:29:24.411 { 00:29:24.411 "results": [ 00:29:24.411 { 00:29:24.411 "job": "nvme0n1", 00:29:24.411 "core_mask": "0x2", 00:29:24.411 "workload": "randwrite", 00:29:24.411 "status": "finished", 00:29:24.411 "queue_depth": 128, 00:29:24.411 "io_size": 4096, 00:29:24.411 "runtime": 2.006072, 00:29:24.411 "iops": 29474.016884737935, 00:29:24.411 "mibps": 115.13287845600756, 00:29:24.411 "io_failed": 0, 00:29:24.411 "io_timeout": 0, 00:29:24.411 "avg_latency_us": 4338.860271306484, 00:29:24.411 "min_latency_us": 1739.2246153846154, 00:29:24.411 "max_latency_us": 16333.587692307692 00:29:24.411 } 00:29:24.411 ], 00:29:24.411 "core_count": 1 00:29:24.411 } 00:29:24.411 12:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:24.411 12:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:24.411 12:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:24.411 | .driver_specific 00:29:24.411 | .nvme_error 00:29:24.411 | .status_code 00:29:24.411 | .command_transient_transport_error' 00:29:24.411 12:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:24.670 12:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 231 > 0 )) 00:29:24.670 12:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93742 00:29:24.670 12:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 93742 ']' 00:29:24.670 12:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 93742 00:29:24.670 12:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:29:24.670 
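The error-count check traced above condenses to one pipeline: bdev_get_iostat is queried over the bperf RPC socket and jq extracts the transient-transport-error counter from the returned bdev stats. A minimal sketch, with the socket path, bdev name, and jq path taken verbatim from this trace:

    # Sketch of get_transient_errcount (host/digest.sh) as exercised above.
    errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errcount > 0 ))  # this run observed 231 transient transport errors

The (( 231 > 0 )) check in the trace is exactly this assertion with the counter already substituted.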
12:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:24.670 12:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93742 00:29:24.670 killing process with pid 93742 00:29:24.670 Received shutdown signal, test time was about 2.000000 seconds 00:29:24.670 00:29:24.670 Latency(us) 00:29:24.670 [2024-11-25T12:28:20.546Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:24.670 [2024-11-25T12:28:20.546Z] =================================================================================================================== 00:29:24.670 [2024-11-25T12:28:20.546Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:24.670 12:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:24.670 12:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:24.670 12:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93742' 00:29:24.670 12:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 93742 00:29:24.670 12:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 93742 00:29:24.670 12:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:29:24.670 12:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:24.670 12:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:29:24.670 12:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:29:24.670 12:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:29:24.670 12:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=93833 00:29:24.670 12:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 93833 /var/tmp/bperf.sock 00:29:24.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:24.670 12:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 93833 ']' 00:29:24.670 12:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:29:24.670 12:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:24.670 12:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:24.670 12:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:24.670 12:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:24.670 12:28:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:24.670 [2024-11-25 12:28:20.528158] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
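run_bperf_err maps its three arguments (workload, block size, queue depth) directly onto the bdevperf command line, and -z holds the app idle until the test is started over RPC (the perform_tests call seen below). A sketch of the relaunch as traced above, arguments verbatim:

    # Relaunch bdevperf for the 128 KiB, qd=16 error pass (flags from the trace);
    # run in the background so the test script can drive it over the RPC socket.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
    bperfpid=$!  # 93833 in this run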
00:29:24.670 [2024-11-25 12:28:20.528432] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93833 ] 00:29:24.670 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:24.670 Zero copy mechanism will not be used. 00:29:24.927 [2024-11-25 12:28:20.655264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.927 [2024-11-25 12:28:20.693012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:25.492 12:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:25.492 12:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:25.492 12:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:25.492 12:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:25.749 12:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:25.749 12:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.749 12:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:25.749 12:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.749 12:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:25.749 12:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:26.007 nvme0n1 00:29:26.007 12:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:26.007 12:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.007 12:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:26.266 12:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.266 12:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:26.266 12:28:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:26.266 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:26.266 Zero copy mechanism will not be used. 00:29:26.266 Running I/O for 2 seconds...
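At this point the digest-error pass is fully set up: NVMe error counters are enabled, the controller is attached with TCP data digest on (--ddgst), and crc32c corruption is re-armed through rpc_cmd (which, unlike bperf_rpc, does not point at the bperf socket) before perform_tests starts the 2-second run. Condensed as a sketch, with every path and argument taken from the trace above:

    # Digest-error setup, condensed from the trace (rpc_cmd in the script uses
    # the default RPC socket; bperf_rpc adds -s /var/tmp/bperf.sock).
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $rpc accel_error_inject_error -o crc32c -t disable    # clear any previous injection
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 32   # re-arm corruption (-i 32 as traced)
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Each corrupted digest then shows up in the wall of records that follows: data_crc32_calc_done reports the digest error and the WRITE completes as COMMAND TRANSIENT TRANSPORT ERROR (00/22), feeding the same counter that get_transient_errcount reads.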
00:29:26.266 [2024-11-25 12:28:21.922898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.266 [2024-11-25 12:28:21.922981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.266 [2024-11-25 12:28:21.923006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:26.266 [2024-11-25 12:28:21.926179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.266 [2024-11-25 12:28:21.926243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.266 [2024-11-25 12:28:21.926259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:26.266 [2024-11-25 12:28:21.929246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.266 [2024-11-25 12:28:21.929312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.266 [2024-11-25 12:28:21.929326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:26.266 [2024-11-25 12:28:21.932128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.266 [2024-11-25 12:28:21.932315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.266 [2024-11-25 12:28:21.932329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:26.266 [2024-11-25 12:28:21.935169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.266 [2024-11-25 12:28:21.935227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.266 [2024-11-25 12:28:21.935241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:26.266 [2024-11-25 12:28:21.938029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.266 [2024-11-25 12:28:21.938112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.266 [2024-11-25 12:28:21.938125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:26.266 [2024-11-25 12:28:21.940866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.266 [2024-11-25 12:28:21.940937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.266 [2024-11-25 12:28:21.940950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:26.266 [2024-11-25 12:28:21.943733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.267 [2024-11-25 12:28:21.943887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.267 [2024-11-25 12:28:21.943900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:26.267 [2024-11-25 12:28:21.946702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.267 [2024-11-25 12:28:21.946781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.267 [2024-11-25 12:28:21.946795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:26.267 [2024-11-25 12:28:21.949545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.267 [2024-11-25 12:28:21.949597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.267 [2024-11-25 12:28:21.949611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:26.267 [2024-11-25 12:28:21.952416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.267 [2024-11-25 12:28:21.952484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.267 [2024-11-25 12:28:21.952497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:26.267 [2024-11-25 12:28:21.955261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.267 [2024-11-25 12:28:21.955389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.267 [2024-11-25 12:28:21.955402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:26.267 [2024-11-25 12:28:21.958206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.267 [2024-11-25 12:28:21.958268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.267 [2024-11-25 12:28:21.958281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:26.267 [2024-11-25 12:28:21.961062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.267 [2024-11-25 12:28:21.961133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.267 [2024-11-25 12:28:21.961146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:26.267 [2024-11-25 12:28:21.963879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8
00:29:26.267 [2024-11-25 12:28:21.963973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.267 [2024-11-25 12:28:21.963987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
[... the same three-line sequence (tcp.c:2233 data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8, nvme_qpair.c:243 WRITE sqid:1 cid:0 nsid:1 len:32 command print, nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats roughly every 3 ms from 12:28:21.966 through 12:28:22.377, for on the order of 140 further WRITE commands; only the lba and the cycling sqhd field (0062, 0002, 0022, 0042) change between repetitions ...]
00:29:26.532 [2024-11-25 12:28:22.380525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8
00:29:26.532 [2024-11-25 12:28:22.380579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.532 [2024-11-25 12:28:22.380592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:26.532 [2024-11-25 12:28:22.383413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.532 [2024-11-25 12:28:22.383468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.532 [2024-11-25 12:28:22.383481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:26.532 [2024-11-25 12:28:22.386244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.532 [2024-11-25 12:28:22.386316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.532 [2024-11-25 12:28:22.386329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:26.532 [2024-11-25 12:28:22.389066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.532 [2024-11-25 12:28:22.389189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.532 [2024-11-25 12:28:22.389202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:26.532 [2024-11-25 12:28:22.392018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.532 [2024-11-25 12:28:22.392075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.532 [2024-11-25 12:28:22.392088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:26.532 [2024-11-25 12:28:22.394851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.532 [2024-11-25 12:28:22.394942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.532 [2024-11-25 12:28:22.394955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:26.532 [2024-11-25 12:28:22.397643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.532 [2024-11-25 12:28:22.397713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.532 [2024-11-25 12:28:22.397726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:26.532 [2024-11-25 12:28:22.400550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.532 [2024-11-25 12:28:22.400675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.532 [2024-11-25 12:28:22.400688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:26.793 [2024-11-25 12:28:22.403451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.793 [2024-11-25 12:28:22.403521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.793 [2024-11-25 12:28:22.403534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:26.793 [2024-11-25 12:28:22.406326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.793 [2024-11-25 12:28:22.406381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.793 [2024-11-25 12:28:22.406394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:26.793 [2024-11-25 12:28:22.409143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.793 [2024-11-25 12:28:22.409215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.793 [2024-11-25 12:28:22.409228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:26.793 [2024-11-25 12:28:22.412035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.793 [2024-11-25 12:28:22.412157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.793 [2024-11-25 12:28:22.412169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:26.793 [2024-11-25 12:28:22.415001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.793 [2024-11-25 12:28:22.415068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.793 [2024-11-25 12:28:22.415081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:26.793 [2024-11-25 12:28:22.417799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.793 [2024-11-25 12:28:22.417873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.793 [2024-11-25 12:28:22.417886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:26.793 [2024-11-25 12:28:22.420627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.793 [2024-11-25 12:28:22.420698] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.793 [2024-11-25 12:28:22.420712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:26.793 [2024-11-25 12:28:22.423495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.793 [2024-11-25 12:28:22.423621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.793 [2024-11-25 12:28:22.423634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:26.793 [2024-11-25 12:28:22.426460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.793 [2024-11-25 12:28:22.426518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.793 [2024-11-25 12:28:22.426531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:26.793 [2024-11-25 12:28:22.429313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.793 [2024-11-25 12:28:22.429374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.793 [2024-11-25 12:28:22.429387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:26.793 [2024-11-25 12:28:22.432157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.793 [2024-11-25 12:28:22.432228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.793 [2024-11-25 12:28:22.432241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:26.793 [2024-11-25 12:28:22.434963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.794 [2024-11-25 12:28:22.435033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.794 [2024-11-25 12:28:22.435046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:26.794 [2024-11-25 12:28:22.437864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.794 [2024-11-25 12:28:22.437917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.794 [2024-11-25 12:28:22.437930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:26.794 [2024-11-25 12:28:22.440364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.794 [2024-11-25 12:28:22.440614] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.794 [2024-11-25 12:28:22.440634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:26.794 [2024-11-25 12:28:22.443053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.794 [2024-11-25 12:28:22.443307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.794 [2024-11-25 12:28:22.443326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:26.794 [2024-11-25 12:28:22.445724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.794 [2024-11-25 12:28:22.445985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.794 [2024-11-25 12:28:22.446004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:26.794 [2024-11-25 12:28:22.448424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.794 [2024-11-25 12:28:22.448676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.794 [2024-11-25 12:28:22.448695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:26.794 [2024-11-25 12:28:22.451111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.794 [2024-11-25 12:28:22.451366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.794 [2024-11-25 12:28:22.451383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:26.794 [2024-11-25 12:28:22.453795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.794 [2024-11-25 12:28:22.454049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.794 [2024-11-25 12:28:22.454068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:26.794 [2024-11-25 12:28:22.456499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.794 [2024-11-25 12:28:22.456768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.794 [2024-11-25 12:28:22.456786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:26.794 [2024-11-25 12:28:22.459193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.794 [2024-11-25 
12:28:22.459442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.794 [2024-11-25 12:28:22.459461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:26.794 [2024-11-25 12:28:22.461883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.794 [2024-11-25 12:28:22.462150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.794 [2024-11-25 12:28:22.462168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:26.794 [2024-11-25 12:28:22.464552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.794 [2024-11-25 12:28:22.464818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.794 [2024-11-25 12:28:22.464836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:26.794 [2024-11-25 12:28:22.467247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.794 [2024-11-25 12:28:22.467496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.794 [2024-11-25 12:28:22.467515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:26.794 [2024-11-25 12:28:22.469949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.794 [2024-11-25 12:28:22.470216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.794 [2024-11-25 12:28:22.470233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:26.794 [2024-11-25 12:28:22.472648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.794 [2024-11-25 12:28:22.472913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.794 [2024-11-25 12:28:22.472932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:26.794 [2024-11-25 12:28:22.475343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.794 [2024-11-25 12:28:22.475590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.794 [2024-11-25 12:28:22.475609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:26.794 [2024-11-25 12:28:22.478035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with 
pdu=0x2000166ff3c8 00:29:26.794 [2024-11-25 12:28:22.478293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.794 [2024-11-25 12:28:22.478311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:26.794 [2024-11-25 12:28:22.480720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.794 [2024-11-25 12:28:22.480984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.794 [2024-11-25 12:28:22.481004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:26.794 [2024-11-25 12:28:22.483400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.794 [2024-11-25 12:28:22.483650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.794 [2024-11-25 12:28:22.483668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:26.794 [2024-11-25 12:28:22.486092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.794 [2024-11-25 12:28:22.486361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.794 [2024-11-25 12:28:22.486380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:26.794 [2024-11-25 12:28:22.488781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.794 [2024-11-25 12:28:22.489035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.794 [2024-11-25 12:28:22.489053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:26.794 [2024-11-25 12:28:22.491444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.794 [2024-11-25 12:28:22.491692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.794 [2024-11-25 12:28:22.491710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:26.794 [2024-11-25 12:28:22.494131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.794 [2024-11-25 12:28:22.494395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.794 [2024-11-25 12:28:22.494413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:26.794 [2024-11-25 12:28:22.496826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.794 [2024-11-25 12:28:22.497077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.794 [2024-11-25 12:28:22.497095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:26.794 [2024-11-25 12:28:22.499500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.794 [2024-11-25 12:28:22.499790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.794 [2024-11-25 12:28:22.499808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:26.794 [2024-11-25 12:28:22.502226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.794 [2024-11-25 12:28:22.502501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.794 [2024-11-25 12:28:22.502521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:26.794 [2024-11-25 12:28:22.504916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.794 [2024-11-25 12:28:22.505165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.794 [2024-11-25 12:28:22.505184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:26.794 [2024-11-25 12:28:22.507581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.794 [2024-11-25 12:28:22.507843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.795 [2024-11-25 12:28:22.507861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:26.795 [2024-11-25 12:28:22.510302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.795 [2024-11-25 12:28:22.510555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.795 [2024-11-25 12:28:22.510573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:26.795 [2024-11-25 12:28:22.512942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.795 [2024-11-25 12:28:22.513189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.795 [2024-11-25 12:28:22.513208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:26.795 [2024-11-25 12:28:22.515617] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.795 [2024-11-25 12:28:22.515881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.795 [2024-11-25 12:28:22.515899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:26.795 [2024-11-25 12:28:22.518312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.795 [2024-11-25 12:28:22.518560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.795 [2024-11-25 12:28:22.518579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:26.795 [2024-11-25 12:28:22.520996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.795 [2024-11-25 12:28:22.521239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.795 [2024-11-25 12:28:22.521258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:26.795 [2024-11-25 12:28:22.523666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.795 [2024-11-25 12:28:22.523950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.795 [2024-11-25 12:28:22.523969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:26.795 [2024-11-25 12:28:22.526415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.795 [2024-11-25 12:28:22.526671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.795 [2024-11-25 12:28:22.526689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:26.795 [2024-11-25 12:28:22.529125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.795 [2024-11-25 12:28:22.529371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.795 [2024-11-25 12:28:22.529389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:26.795 [2024-11-25 12:28:22.531803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.795 [2024-11-25 12:28:22.532051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.795 [2024-11-25 12:28:22.532070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:26.795 [2024-11-25 12:28:22.534474] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.795 [2024-11-25 12:28:22.534723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.795 [2024-11-25 12:28:22.534742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:26.795 [2024-11-25 12:28:22.537179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.795 [2024-11-25 12:28:22.537431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.795 [2024-11-25 12:28:22.537449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:26.795 [2024-11-25 12:28:22.539896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.795 [2024-11-25 12:28:22.540145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.795 [2024-11-25 12:28:22.540163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:26.795 [2024-11-25 12:28:22.542574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.795 [2024-11-25 12:28:22.542842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.795 [2024-11-25 12:28:22.542860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:26.795 [2024-11-25 12:28:22.545253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.795 [2024-11-25 12:28:22.545504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.795 [2024-11-25 12:28:22.545522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:26.795 [2024-11-25 12:28:22.547921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.795 [2024-11-25 12:28:22.548172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.795 [2024-11-25 12:28:22.548190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:26.795 [2024-11-25 12:28:22.550609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.795 [2024-11-25 12:28:22.550877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.795 [2024-11-25 12:28:22.550896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:26.795 
[2024-11-25 12:28:22.553223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.795 [2024-11-25 12:28:22.553434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.795 [2024-11-25 12:28:22.553452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:26.795 [2024-11-25 12:28:22.555676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.795 [2024-11-25 12:28:22.555809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.795 [2024-11-25 12:28:22.555824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:26.795 [2024-11-25 12:28:22.558227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.795 [2024-11-25 12:28:22.558374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.795 [2024-11-25 12:28:22.558392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:26.795 [2024-11-25 12:28:22.560621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.795 [2024-11-25 12:28:22.560785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.795 [2024-11-25 12:28:22.560803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:26.795 [2024-11-25 12:28:22.563149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.795 [2024-11-25 12:28:22.563270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.795 [2024-11-25 12:28:22.563284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:26.795 [2024-11-25 12:28:22.565697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.795 [2024-11-25 12:28:22.565888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.795 [2024-11-25 12:28:22.566005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:26.795 [2024-11-25 12:28:22.568194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.795 [2024-11-25 12:28:22.568321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.795 [2024-11-25 12:28:22.568455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:29:26.795 [2024-11-25 12:28:22.570730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.795 [2024-11-25 12:28:22.570857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.795 [2024-11-25 12:28:22.570939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:26.795 [2024-11-25 12:28:22.573237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.795 [2024-11-25 12:28:22.573383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.795 [2024-11-25 12:28:22.573466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:26.795 [2024-11-25 12:28:22.575742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.795 [2024-11-25 12:28:22.575863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.795 [2024-11-25 12:28:22.575948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:26.795 [2024-11-25 12:28:22.578265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.795 [2024-11-25 12:28:22.578379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.795 [2024-11-25 12:28:22.578505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:26.796 [2024-11-25 12:28:22.580762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.796 [2024-11-25 12:28:22.580904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.796 [2024-11-25 12:28:22.580990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:26.796 [2024-11-25 12:28:22.583269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.796 [2024-11-25 12:28:22.583393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.796 [2024-11-25 12:28:22.583484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:26.796 [2024-11-25 12:28:22.585811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.796 [2024-11-25 12:28:22.585938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.796 [2024-11-25 12:28:22.586025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:26.796 [2024-11-25 12:28:22.588369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.796 [2024-11-25 12:28:22.588492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.796 [2024-11-25 12:28:22.588589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:26.796 [2024-11-25 12:28:22.590890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.796 [2024-11-25 12:28:22.591034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.796 [2024-11-25 12:28:22.591137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:26.796 [2024-11-25 12:28:22.593377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.796 [2024-11-25 12:28:22.593504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.796 [2024-11-25 12:28:22.593615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:26.796 [2024-11-25 12:28:22.595920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.796 [2024-11-25 12:28:22.596055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.796 [2024-11-25 12:28:22.596075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:26.796 [2024-11-25 12:28:22.598434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.796 [2024-11-25 12:28:22.598554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.796 [2024-11-25 12:28:22.598568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:26.796 [2024-11-25 12:28:22.601012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.796 [2024-11-25 12:28:22.601121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.796 [2024-11-25 12:28:22.601134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:26.796 [2024-11-25 12:28:22.603504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.796 [2024-11-25 12:28:22.603606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.796 [2024-11-25 12:28:22.603626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:26.796 [2024-11-25 12:28:22.605985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.796 [2024-11-25 12:28:22.606102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.796 [2024-11-25 12:28:22.606121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:26.796 [2024-11-25 12:28:22.608420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.796 [2024-11-25 12:28:22.608544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.796 [2024-11-25 12:28:22.608563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:26.796 [2024-11-25 12:28:22.610855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.796 [2024-11-25 12:28:22.610987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.796 [2024-11-25 12:28:22.611006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:26.796 [2024-11-25 12:28:22.613256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.796 [2024-11-25 12:28:22.613375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.796 [2024-11-25 12:28:22.613394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:26.796 [2024-11-25 12:28:22.615698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.796 [2024-11-25 12:28:22.615815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.796 [2024-11-25 12:28:22.615834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:26.796 [2024-11-25 12:28:22.618154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.796 [2024-11-25 12:28:22.618271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.796 [2024-11-25 12:28:22.618291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:26.796 [2024-11-25 12:28:22.620571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:26.796 [2024-11-25 12:28:22.620676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.796 [2024-11-25 12:28:22.620695] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:26.796 [2024-11-25 12:28:22.623050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8
00:29:26.796 [2024-11-25 12:28:22.623164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.796 [2024-11-25 12:28:22.623183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
[... the same data_crc32_calc_done *ERROR* / WRITE *NOTICE* / COMMAND TRANSIENT TRANSPORT ERROR (00/22) triplet repeats for every queued WRITE on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 (qid:1, cid:1-2, nsid:1, len:32, varying lba and sqhd) from 12:28:22.625 through 12:28:22.919 ...]
00:29:27.063 11547.00 IOPS, 1443.38 MiB/s [2024-11-25T12:28:22.939Z]
[... the triplet continues repeating, unchanged except for lba, cid, and sqhd, from 12:28:22.923 through 12:28:22.974 ...]
00:29:27.325 [2024-11-25 12:28:22.977007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 [2024-11-25
12:28:22.977086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.325 [2024-11-25 12:28:22.977099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:27.325 [2024-11-25 12:28:22.979469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.325 [2024-11-25 12:28:22.979525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.325 [2024-11-25 12:28:22.979538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:27.325 [2024-11-25 12:28:22.981909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.325 [2024-11-25 12:28:22.981982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.325 [2024-11-25 12:28:22.981995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:27.325 [2024-11-25 12:28:22.984340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.325 [2024-11-25 12:28:22.984453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.325 [2024-11-25 12:28:22.984468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:27.325 [2024-11-25 12:28:22.986915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.325 [2024-11-25 12:28:22.986980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.325 [2024-11-25 12:28:22.986993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:27.325 [2024-11-25 12:28:22.989371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.325 [2024-11-25 12:28:22.989425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.325 [2024-11-25 12:28:22.989438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:27.325 [2024-11-25 12:28:22.991891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.325 [2024-11-25 12:28:22.991951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.325 [2024-11-25 12:28:22.991964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:27.325 [2024-11-25 12:28:22.994303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with 
pdu=0x2000166ff3c8 00:29:27.325 [2024-11-25 12:28:22.994362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.325 [2024-11-25 12:28:22.994375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:27.325 [2024-11-25 12:28:22.996715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.325 [2024-11-25 12:28:22.996857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.325 [2024-11-25 12:28:22.996870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:27.325 [2024-11-25 12:28:22.999236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.325 [2024-11-25 12:28:22.999303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.325 [2024-11-25 12:28:22.999315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:27.325 [2024-11-25 12:28:23.001607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.325 [2024-11-25 12:28:23.001664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.325 [2024-11-25 12:28:23.001677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:27.325 [2024-11-25 12:28:23.004053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.325 [2024-11-25 12:28:23.004131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.325 [2024-11-25 12:28:23.004145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:27.325 [2024-11-25 12:28:23.006496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.325 [2024-11-25 12:28:23.006569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.325 [2024-11-25 12:28:23.006582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:27.325 [2024-11-25 12:28:23.008912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.325 [2024-11-25 12:28:23.008965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.325 [2024-11-25 12:28:23.008978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:27.325 [2024-11-25 12:28:23.011351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.325 [2024-11-25 12:28:23.011465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.326 [2024-11-25 12:28:23.011479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:27.326 [2024-11-25 12:28:23.013815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.326 [2024-11-25 12:28:23.013862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.326 [2024-11-25 12:28:23.013875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:27.326 [2024-11-25 12:28:23.016234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.326 [2024-11-25 12:28:23.016294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.326 [2024-11-25 12:28:23.016307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:27.326 [2024-11-25 12:28:23.018654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.326 [2024-11-25 12:28:23.018723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.326 [2024-11-25 12:28:23.018735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:27.326 [2024-11-25 12:28:23.021061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.326 [2024-11-25 12:28:23.021117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.326 [2024-11-25 12:28:23.021130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:27.326 [2024-11-25 12:28:23.023481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.326 [2024-11-25 12:28:23.023603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.326 [2024-11-25 12:28:23.023616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:27.326 [2024-11-25 12:28:23.025993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.326 [2024-11-25 12:28:23.026058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.326 [2024-11-25 12:28:23.026070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:27.326 [2024-11-25 12:28:23.028451] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.326 [2024-11-25 12:28:23.028525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.326 [2024-11-25 12:28:23.028538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:27.326 [2024-11-25 12:28:23.030928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.326 [2024-11-25 12:28:23.030991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.326 [2024-11-25 12:28:23.031004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:27.326 [2024-11-25 12:28:23.033324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.326 [2024-11-25 12:28:23.033375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.326 [2024-11-25 12:28:23.033388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:27.326 [2024-11-25 12:28:23.035771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.326 [2024-11-25 12:28:23.035833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.326 [2024-11-25 12:28:23.035846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:27.326 [2024-11-25 12:28:23.038163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.326 [2024-11-25 12:28:23.038235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.326 [2024-11-25 12:28:23.038248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:27.326 [2024-11-25 12:28:23.040547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.326 [2024-11-25 12:28:23.040623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.326 [2024-11-25 12:28:23.040635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:27.326 [2024-11-25 12:28:23.043014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.326 [2024-11-25 12:28:23.043073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.326 [2024-11-25 12:28:23.043086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:27.326 [2024-11-25 12:28:23.045394] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.326 [2024-11-25 12:28:23.045472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.326 [2024-11-25 12:28:23.045485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:27.326 [2024-11-25 12:28:23.047816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.326 [2024-11-25 12:28:23.047876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.326 [2024-11-25 12:28:23.047888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:27.326 [2024-11-25 12:28:23.050242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.326 [2024-11-25 12:28:23.050358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.326 [2024-11-25 12:28:23.050371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:27.326 [2024-11-25 12:28:23.052740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.326 [2024-11-25 12:28:23.052814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.326 [2024-11-25 12:28:23.052827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:27.326 [2024-11-25 12:28:23.055157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.326 [2024-11-25 12:28:23.055210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.326 [2024-11-25 12:28:23.055223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:27.326 [2024-11-25 12:28:23.057579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.326 [2024-11-25 12:28:23.057697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.326 [2024-11-25 12:28:23.057710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:27.326 [2024-11-25 12:28:23.060122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.326 [2024-11-25 12:28:23.060179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.326 [2024-11-25 12:28:23.060192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:27.326 
[2024-11-25 12:28:23.062526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.327 [2024-11-25 12:28:23.062582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.327 [2024-11-25 12:28:23.062596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:27.327 [2024-11-25 12:28:23.064957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.327 [2024-11-25 12:28:23.065014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.327 [2024-11-25 12:28:23.065027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:27.327 [2024-11-25 12:28:23.067383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.327 [2024-11-25 12:28:23.067442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.327 [2024-11-25 12:28:23.067454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:27.327 [2024-11-25 12:28:23.069745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.327 [2024-11-25 12:28:23.069832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.327 [2024-11-25 12:28:23.069845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:27.327 [2024-11-25 12:28:23.072138] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.327 [2024-11-25 12:28:23.072192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.327 [2024-11-25 12:28:23.072204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:27.327 [2024-11-25 12:28:23.074560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.327 [2024-11-25 12:28:23.074618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.327 [2024-11-25 12:28:23.074631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:27.327 [2024-11-25 12:28:23.076989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.327 [2024-11-25 12:28:23.077052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.327 [2024-11-25 12:28:23.077066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:29:27.327 [2024-11-25 12:28:23.079424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.327 [2024-11-25 12:28:23.079479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.327 [2024-11-25 12:28:23.079492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:27.327 [2024-11-25 12:28:23.081841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.327 [2024-11-25 12:28:23.081919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.327 [2024-11-25 12:28:23.081932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:27.327 [2024-11-25 12:28:23.084274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.327 [2024-11-25 12:28:23.084420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.327 [2024-11-25 12:28:23.084433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:27.327 [2024-11-25 12:28:23.086788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.327 [2024-11-25 12:28:23.086856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.327 [2024-11-25 12:28:23.086869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:27.327 [2024-11-25 12:28:23.089229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.327 [2024-11-25 12:28:23.089286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.327 [2024-11-25 12:28:23.089299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:27.327 [2024-11-25 12:28:23.091641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.327 [2024-11-25 12:28:23.091775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.327 [2024-11-25 12:28:23.091788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:27.327 [2024-11-25 12:28:23.094152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.327 [2024-11-25 12:28:23.094234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.327 [2024-11-25 12:28:23.094246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:27.327 [2024-11-25 12:28:23.096566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.327 [2024-11-25 12:28:23.096628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.327 [2024-11-25 12:28:23.096641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:27.327 [2024-11-25 12:28:23.098957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.327 [2024-11-25 12:28:23.099010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.327 [2024-11-25 12:28:23.099023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:27.327 [2024-11-25 12:28:23.101373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.327 [2024-11-25 12:28:23.101446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.327 [2024-11-25 12:28:23.101459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:27.327 [2024-11-25 12:28:23.103807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.327 [2024-11-25 12:28:23.103898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.327 [2024-11-25 12:28:23.103917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:27.327 [2024-11-25 12:28:23.106254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.327 [2024-11-25 12:28:23.106317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.327 [2024-11-25 12:28:23.106330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:27.327 [2024-11-25 12:28:23.108631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.327 [2024-11-25 12:28:23.108696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.327 [2024-11-25 12:28:23.108708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:27.327 [2024-11-25 12:28:23.111101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.327 [2024-11-25 12:28:23.111158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.327 [2024-11-25 12:28:23.111172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:27.327 [2024-11-25 12:28:23.113535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.327 [2024-11-25 12:28:23.113591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.327 [2024-11-25 12:28:23.113605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:27.327 [2024-11-25 12:28:23.115947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.327 [2024-11-25 12:28:23.116032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.327 [2024-11-25 12:28:23.116045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:27.327 [2024-11-25 12:28:23.118353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.327 [2024-11-25 12:28:23.118474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.327 [2024-11-25 12:28:23.118486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:27.327 [2024-11-25 12:28:23.120880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.327 [2024-11-25 12:28:23.120937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.327 [2024-11-25 12:28:23.120950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:27.327 [2024-11-25 12:28:23.123278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.327 [2024-11-25 12:28:23.123335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.327 [2024-11-25 12:28:23.123348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:27.327 [2024-11-25 12:28:23.125688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.327 [2024-11-25 12:28:23.125765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.327 [2024-11-25 12:28:23.125778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:27.328 [2024-11-25 12:28:23.128108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.328 [2024-11-25 12:28:23.128163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.328 [2024-11-25 12:28:23.128176] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:27.328 [2024-11-25 12:28:23.130514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.328 [2024-11-25 12:28:23.130630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.328 [2024-11-25 12:28:23.130643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:27.328 [2024-11-25 12:28:23.133024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.328 [2024-11-25 12:28:23.133097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.328 [2024-11-25 12:28:23.133109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:27.328 [2024-11-25 12:28:23.135437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.328 [2024-11-25 12:28:23.135518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.328 [2024-11-25 12:28:23.135531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:27.328 [2024-11-25 12:28:23.137932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.328 [2024-11-25 12:28:23.137989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.328 [2024-11-25 12:28:23.138003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:27.328 [2024-11-25 12:28:23.140311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.328 [2024-11-25 12:28:23.140376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.328 [2024-11-25 12:28:23.140389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:27.328 [2024-11-25 12:28:23.142727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.328 [2024-11-25 12:28:23.142860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.328 [2024-11-25 12:28:23.142872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:27.328 [2024-11-25 12:28:23.145264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.328 [2024-11-25 12:28:23.145321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.328 [2024-11-25 
12:28:23.145334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:27.328 [2024-11-25 12:28:23.147669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.328 [2024-11-25 12:28:23.147731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.328 [2024-11-25 12:28:23.147744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:27.328 [2024-11-25 12:28:23.150124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.328 [2024-11-25 12:28:23.150185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.328 [2024-11-25 12:28:23.150199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:27.328 [2024-11-25 12:28:23.152584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.328 [2024-11-25 12:28:23.152642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.328 [2024-11-25 12:28:23.152655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:27.328 [2024-11-25 12:28:23.154985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.328 [2024-11-25 12:28:23.155054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.328 [2024-11-25 12:28:23.155068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:27.328 [2024-11-25 12:28:23.157441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.328 [2024-11-25 12:28:23.157553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.328 [2024-11-25 12:28:23.157566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:27.328 [2024-11-25 12:28:23.159966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.328 [2024-11-25 12:28:23.160033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.328 [2024-11-25 12:28:23.160045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:27.328 [2024-11-25 12:28:23.162374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.328 [2024-11-25 12:28:23.162427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:27.328 [2024-11-25 12:28:23.162439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:27.328 [2024-11-25 12:28:23.164788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.328 [2024-11-25 12:28:23.164842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.328 [2024-11-25 12:28:23.164855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:27.328 [2024-11-25 12:28:23.167230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.328 [2024-11-25 12:28:23.167287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.328 [2024-11-25 12:28:23.167300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:27.328 [2024-11-25 12:28:23.169636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.328 [2024-11-25 12:28:23.169745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.328 [2024-11-25 12:28:23.169770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:27.328 [2024-11-25 12:28:23.172160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.328 [2024-11-25 12:28:23.172240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.328 [2024-11-25 12:28:23.172253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:27.328 [2024-11-25 12:28:23.174581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.328 [2024-11-25 12:28:23.174638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.328 [2024-11-25 12:28:23.174651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:27.328 [2024-11-25 12:28:23.176988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.328 [2024-11-25 12:28:23.177041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.328 [2024-11-25 12:28:23.177054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:27.328 [2024-11-25 12:28:23.179404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.328 [2024-11-25 12:28:23.179459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13120 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.328 [2024-11-25 12:28:23.179472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:27.328 [2024-11-25 12:28:23.181820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.328 [2024-11-25 12:28:23.181874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.328 [2024-11-25 12:28:23.181887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:27.328 [2024-11-25 12:28:23.184215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.328 [2024-11-25 12:28:23.184337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.328 [2024-11-25 12:28:23.184349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:27.328 [2024-11-25 12:28:23.186733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.328 [2024-11-25 12:28:23.186804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.328 [2024-11-25 12:28:23.186818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:27.328 [2024-11-25 12:28:23.189166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.328 [2024-11-25 12:28:23.189246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.328 [2024-11-25 12:28:23.189258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:27.328 [2024-11-25 12:28:23.191558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.328 [2024-11-25 12:28:23.191695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.329 [2024-11-25 12:28:23.191708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:27.329 [2024-11-25 12:28:23.194039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.329 [2024-11-25 12:28:23.194095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.329 [2024-11-25 12:28:23.194108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:27.329 [2024-11-25 12:28:23.196492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.329 [2024-11-25 12:28:23.196554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:27.329 [2024-11-25 12:28:23.196567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:27.591 [2024-11-25 12:28:23.198959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8
00:29:27.591 [2024-11-25 12:28:23.199013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:27.591 [2024-11-25 12:28:23.199027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
[... the same three-record sequence (tcp.c:2233:data_crc32_calc_done *ERROR*: Data digest error, nvme_qpair.c:243:nvme_io_qpair_print_command *NOTICE*: WRITE, nvme_qpair.c:474:spdk_nvme_print_completion *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22)) repeats roughly every 2.4 ms from 12:28:23.201 through 12:28:23.546, about 140 iterations, all on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8, qid:1, len:32; only the timestamp, lba, cid (1 or 2), and sqhd (cycling 0002/0022/0042/0062) vary between iterations ...]
00:29:27.858 [2024-11-25 12:28:23.549341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.858 [2024-11-25 12:28:23.549401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.858 [2024-11-25 12:28:23.549415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:27.858 [2024-11-25 12:28:23.551732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.858 [2024-11-25 12:28:23.551805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.858 [2024-11-25 12:28:23.551818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:27.858 [2024-11-25 12:28:23.554165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.858 [2024-11-25 12:28:23.554234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.858 [2024-11-25 12:28:23.554247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:27.858 [2024-11-25 12:28:23.556537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.858 [2024-11-25 12:28:23.556599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.858 [2024-11-25 12:28:23.556612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:27.858 [2024-11-25 12:28:23.558965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.858 [2024-11-25 12:28:23.559039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.858 [2024-11-25 12:28:23.559051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:27.858 [2024-11-25 12:28:23.561385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.858 [2024-11-25 12:28:23.561440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.858 [2024-11-25 12:28:23.561453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:27.858 [2024-11-25 12:28:23.563841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.858 [2024-11-25 12:28:23.563897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.858 [2024-11-25 12:28:23.563910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:27.858 [2024-11-25 12:28:23.566204] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.858 [2024-11-25 12:28:23.566268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.858 [2024-11-25 12:28:23.566281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:27.858 [2024-11-25 12:28:23.568617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.858 [2024-11-25 12:28:23.568674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.858 [2024-11-25 12:28:23.568687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:27.858 [2024-11-25 12:28:23.571002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.858 [2024-11-25 12:28:23.571056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.858 [2024-11-25 12:28:23.571069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:27.858 [2024-11-25 12:28:23.573442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.858 [2024-11-25 12:28:23.573496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.858 [2024-11-25 12:28:23.573510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:27.858 [2024-11-25 12:28:23.575882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.858 [2024-11-25 12:28:23.575957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.858 [2024-11-25 12:28:23.575976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:27.858 [2024-11-25 12:28:23.578305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.858 [2024-11-25 12:28:23.578396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.858 [2024-11-25 12:28:23.578415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:27.858 [2024-11-25 12:28:23.580729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.858 [2024-11-25 12:28:23.580798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.858 [2024-11-25 12:28:23.580811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:27.858 
[2024-11-25 12:28:23.583127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.858 [2024-11-25 12:28:23.583185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.858 [2024-11-25 12:28:23.583198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:27.858 [2024-11-25 12:28:23.585510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.858 [2024-11-25 12:28:23.585568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.858 [2024-11-25 12:28:23.585581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:27.858 [2024-11-25 12:28:23.587910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.858 [2024-11-25 12:28:23.587972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.858 [2024-11-25 12:28:23.587985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:27.858 [2024-11-25 12:28:23.590310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.858 [2024-11-25 12:28:23.590400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.858 [2024-11-25 12:28:23.590419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:27.858 [2024-11-25 12:28:23.592693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.858 [2024-11-25 12:28:23.592762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.858 [2024-11-25 12:28:23.592775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:27.858 [2024-11-25 12:28:23.595152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.858 [2024-11-25 12:28:23.595201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.858 [2024-11-25 12:28:23.595214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:27.858 [2024-11-25 12:28:23.597564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.858 [2024-11-25 12:28:23.597628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.858 [2024-11-25 12:28:23.597641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:29:27.858 [2024-11-25 12:28:23.600014] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.858 [2024-11-25 12:28:23.600075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.858 [2024-11-25 12:28:23.600088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:27.858 [2024-11-25 12:28:23.602387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.858 [2024-11-25 12:28:23.602452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.858 [2024-11-25 12:28:23.602464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:27.858 [2024-11-25 12:28:23.604816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.858 [2024-11-25 12:28:23.604892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.858 [2024-11-25 12:28:23.604909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:27.859 [2024-11-25 12:28:23.607218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.859 [2024-11-25 12:28:23.607294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.859 [2024-11-25 12:28:23.607307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:27.859 [2024-11-25 12:28:23.609618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.859 [2024-11-25 12:28:23.609695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.859 [2024-11-25 12:28:23.609707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:27.859 [2024-11-25 12:28:23.612026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.859 [2024-11-25 12:28:23.612088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.859 [2024-11-25 12:28:23.612101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:27.859 [2024-11-25 12:28:23.614444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.859 [2024-11-25 12:28:23.614500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.859 [2024-11-25 12:28:23.614513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:27.859 [2024-11-25 12:28:23.616822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.859 [2024-11-25 12:28:23.616882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.859 [2024-11-25 12:28:23.616896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:27.859 [2024-11-25 12:28:23.619226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.859 [2024-11-25 12:28:23.619353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.859 [2024-11-25 12:28:23.619366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:27.859 [2024-11-25 12:28:23.621611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.859 [2024-11-25 12:28:23.621695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.859 [2024-11-25 12:28:23.621708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:27.859 [2024-11-25 12:28:23.624020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.859 [2024-11-25 12:28:23.624072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.859 [2024-11-25 12:28:23.624085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:27.859 [2024-11-25 12:28:23.626436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.859 [2024-11-25 12:28:23.626489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.859 [2024-11-25 12:28:23.626502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:27.859 [2024-11-25 12:28:23.628858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.859 [2024-11-25 12:28:23.628939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.859 [2024-11-25 12:28:23.628951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:27.859 [2024-11-25 12:28:23.631300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.859 [2024-11-25 12:28:23.631382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.859 [2024-11-25 12:28:23.631395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:27.859 [2024-11-25 12:28:23.633703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.859 [2024-11-25 12:28:23.633773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.859 [2024-11-25 12:28:23.633786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:27.859 [2024-11-25 12:28:23.636137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.859 [2024-11-25 12:28:23.636195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.859 [2024-11-25 12:28:23.636208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:27.859 [2024-11-25 12:28:23.638570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.859 [2024-11-25 12:28:23.638625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.859 [2024-11-25 12:28:23.638638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:27.859 [2024-11-25 12:28:23.640996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.859 [2024-11-25 12:28:23.641051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.859 [2024-11-25 12:28:23.641064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:27.859 [2024-11-25 12:28:23.643427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.859 [2024-11-25 12:28:23.643483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.859 [2024-11-25 12:28:23.643496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:27.859 [2024-11-25 12:28:23.645833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.859 [2024-11-25 12:28:23.645889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.859 [2024-11-25 12:28:23.645902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:27.859 [2024-11-25 12:28:23.648257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.859 [2024-11-25 12:28:23.648310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.859 [2024-11-25 12:28:23.648323] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:27.859 [2024-11-25 12:28:23.650710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.859 [2024-11-25 12:28:23.650775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.859 [2024-11-25 12:28:23.650788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:27.859 [2024-11-25 12:28:23.653154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.859 [2024-11-25 12:28:23.653206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.859 [2024-11-25 12:28:23.653219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:27.859 [2024-11-25 12:28:23.655576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.859 [2024-11-25 12:28:23.655634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.859 [2024-11-25 12:28:23.655646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:27.859 [2024-11-25 12:28:23.658019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.859 [2024-11-25 12:28:23.658076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.859 [2024-11-25 12:28:23.658089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:27.859 [2024-11-25 12:28:23.660466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.859 [2024-11-25 12:28:23.660516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.859 [2024-11-25 12:28:23.660529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:27.859 [2024-11-25 12:28:23.662887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.859 [2024-11-25 12:28:23.662949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.859 [2024-11-25 12:28:23.662961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:27.859 [2024-11-25 12:28:23.665299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.859 [2024-11-25 12:28:23.665354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.859 [2024-11-25 
12:28:23.665366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:27.859 [2024-11-25 12:28:23.667713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.859 [2024-11-25 12:28:23.667782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.859 [2024-11-25 12:28:23.667795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:27.859 [2024-11-25 12:28:23.670169] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.859 [2024-11-25 12:28:23.670222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.859 [2024-11-25 12:28:23.670235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:27.859 [2024-11-25 12:28:23.672575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.860 [2024-11-25 12:28:23.672649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.860 [2024-11-25 12:28:23.672661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:27.860 [2024-11-25 12:28:23.675005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.860 [2024-11-25 12:28:23.675060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.860 [2024-11-25 12:28:23.675073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:27.860 [2024-11-25 12:28:23.677422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.860 [2024-11-25 12:28:23.677475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.860 [2024-11-25 12:28:23.677488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:27.860 [2024-11-25 12:28:23.679866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.860 [2024-11-25 12:28:23.679917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.860 [2024-11-25 12:28:23.679930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:27.860 [2024-11-25 12:28:23.682302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.860 [2024-11-25 12:28:23.682357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:27.860 [2024-11-25 12:28:23.682370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:27.860 [2024-11-25 12:28:23.684673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.860 [2024-11-25 12:28:23.684759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.860 [2024-11-25 12:28:23.684772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:27.860 [2024-11-25 12:28:23.687125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.860 [2024-11-25 12:28:23.687180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.860 [2024-11-25 12:28:23.687193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:27.860 [2024-11-25 12:28:23.689565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.860 [2024-11-25 12:28:23.689624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.860 [2024-11-25 12:28:23.689637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:27.860 [2024-11-25 12:28:23.691994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.860 [2024-11-25 12:28:23.692060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.860 [2024-11-25 12:28:23.692072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:27.860 [2024-11-25 12:28:23.694425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.860 [2024-11-25 12:28:23.694481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.860 [2024-11-25 12:28:23.694494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:27.860 [2024-11-25 12:28:23.696860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.860 [2024-11-25 12:28:23.696939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.860 [2024-11-25 12:28:23.696952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:27.860 [2024-11-25 12:28:23.699310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.860 [2024-11-25 12:28:23.699385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24256 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.860 [2024-11-25 12:28:23.699397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:27.860 [2024-11-25 12:28:23.701761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.860 [2024-11-25 12:28:23.701822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.860 [2024-11-25 12:28:23.701835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:27.860 [2024-11-25 12:28:23.704194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.860 [2024-11-25 12:28:23.704254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.860 [2024-11-25 12:28:23.704268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:27.860 [2024-11-25 12:28:23.706636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.860 [2024-11-25 12:28:23.706685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.860 [2024-11-25 12:28:23.706698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:27.860 [2024-11-25 12:28:23.709032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.860 [2024-11-25 12:28:23.709094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.860 [2024-11-25 12:28:23.709107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:27.860 [2024-11-25 12:28:23.711437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.860 [2024-11-25 12:28:23.711495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.860 [2024-11-25 12:28:23.711508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:27.860 [2024-11-25 12:28:23.713853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.860 [2024-11-25 12:28:23.713913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.860 [2024-11-25 12:28:23.713926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:27.860 [2024-11-25 12:28:23.716243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.860 [2024-11-25 12:28:23.716302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 
nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.860 [2024-11-25 12:28:23.716316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:27.860 [2024-11-25 12:28:23.718660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.860 [2024-11-25 12:28:23.718709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.860 [2024-11-25 12:28:23.718722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:27.860 [2024-11-25 12:28:23.721110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.860 [2024-11-25 12:28:23.721163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.860 [2024-11-25 12:28:23.721176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:27.860 [2024-11-25 12:28:23.723501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.860 [2024-11-25 12:28:23.723576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.860 [2024-11-25 12:28:23.723589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:27.860 [2024-11-25 12:28:23.725928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:27.860 [2024-11-25 12:28:23.725980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.860 [2024-11-25 12:28:23.725993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:28.121 [2024-11-25 12:28:23.728368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.121 [2024-11-25 12:28:23.728427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.121 [2024-11-25 12:28:23.728440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:28.121 [2024-11-25 12:28:23.730811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.121 [2024-11-25 12:28:23.730874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.121 [2024-11-25 12:28:23.730886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:28.121 [2024-11-25 12:28:23.733202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.121 [2024-11-25 12:28:23.733280] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.121 [2024-11-25 12:28:23.733300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:28.121 [2024-11-25 12:28:23.735623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.121 [2024-11-25 12:28:23.735677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.121 [2024-11-25 12:28:23.735691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:28.121 [2024-11-25 12:28:23.738033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.121 [2024-11-25 12:28:23.738090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.121 [2024-11-25 12:28:23.738102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:28.121 [2024-11-25 12:28:23.740456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.121 [2024-11-25 12:28:23.740516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.121 [2024-11-25 12:28:23.740529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:28.121 [2024-11-25 12:28:23.742894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.121 [2024-11-25 12:28:23.742949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.121 [2024-11-25 12:28:23.742963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:28.121 [2024-11-25 12:28:23.745290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.121 [2024-11-25 12:28:23.745343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.121 [2024-11-25 12:28:23.745356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:28.121 [2024-11-25 12:28:23.747702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.121 [2024-11-25 12:28:23.747770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.121 [2024-11-25 12:28:23.747783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:28.121 [2024-11-25 12:28:23.750160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.121 [2024-11-25 12:28:23.750223] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.121 [2024-11-25 12:28:23.750236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:28.121 [2024-11-25 12:28:23.752551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.121 [2024-11-25 12:28:23.752613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.121 [2024-11-25 12:28:23.752625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:28.121 [2024-11-25 12:28:23.754994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.121 [2024-11-25 12:28:23.755050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.121 [2024-11-25 12:28:23.755063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:28.121 [2024-11-25 12:28:23.757423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.121 [2024-11-25 12:28:23.757481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.121 [2024-11-25 12:28:23.757494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:28.121 [2024-11-25 12:28:23.759826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.121 [2024-11-25 12:28:23.759877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.121 [2024-11-25 12:28:23.759890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:28.121 [2024-11-25 12:28:23.762272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.121 [2024-11-25 12:28:23.762330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.121 [2024-11-25 12:28:23.762344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:28.121 [2024-11-25 12:28:23.764651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.121 [2024-11-25 12:28:23.764726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.121 [2024-11-25 12:28:23.764739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:28.121 [2024-11-25 12:28:23.767066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.121 [2024-11-25 
12:28:23.767123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.121 [2024-11-25 12:28:23.767136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:28.121 [2024-11-25 12:28:23.769482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.121 [2024-11-25 12:28:23.769538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.121 [2024-11-25 12:28:23.769551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:28.121 [2024-11-25 12:28:23.771897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.121 [2024-11-25 12:28:23.771957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.121 [2024-11-25 12:28:23.771970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:28.121 [2024-11-25 12:28:23.774304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.121 [2024-11-25 12:28:23.774365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.121 [2024-11-25 12:28:23.774379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:28.121 [2024-11-25 12:28:23.776720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.121 [2024-11-25 12:28:23.776805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.121 [2024-11-25 12:28:23.776818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:28.121 [2024-11-25 12:28:23.779180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.121 [2024-11-25 12:28:23.779232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.121 [2024-11-25 12:28:23.779245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:28.121 [2024-11-25 12:28:23.781553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.121 [2024-11-25 12:28:23.781635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.121 [2024-11-25 12:28:23.781647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:28.121 [2024-11-25 12:28:23.783961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with 
pdu=0x2000166ff3c8 00:29:28.122 [2024-11-25 12:28:23.784019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.122 [2024-11-25 12:28:23.784032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:28.122 [2024-11-25 12:28:23.786379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.122 [2024-11-25 12:28:23.786437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.122 [2024-11-25 12:28:23.786450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:28.122 [2024-11-25 12:28:23.788766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.122 [2024-11-25 12:28:23.788839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.122 [2024-11-25 12:28:23.788852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:28.122 [2024-11-25 12:28:23.791213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.122 [2024-11-25 12:28:23.791272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.122 [2024-11-25 12:28:23.791286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:28.122 [2024-11-25 12:28:23.793599] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.122 [2024-11-25 12:28:23.793677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.122 [2024-11-25 12:28:23.793695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:28.122 [2024-11-25 12:28:23.796044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.122 [2024-11-25 12:28:23.796104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.122 [2024-11-25 12:28:23.796117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:28.122 [2024-11-25 12:28:23.798477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.122 [2024-11-25 12:28:23.798537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.122 [2024-11-25 12:28:23.798550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:28.122 [2024-11-25 12:28:23.800922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.122 [2024-11-25 12:28:23.800981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.122 [2024-11-25 12:28:23.800994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:28.122 [2024-11-25 12:28:23.803363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.122 [2024-11-25 12:28:23.803422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.122 [2024-11-25 12:28:23.803435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:28.122 [2024-11-25 12:28:23.805872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.122 [2024-11-25 12:28:23.805909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.122 [2024-11-25 12:28:23.805922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:28.122 [2024-11-25 12:28:23.808253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.122 [2024-11-25 12:28:23.808318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.122 [2024-11-25 12:28:23.808331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:28.122 [2024-11-25 12:28:23.810686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.122 [2024-11-25 12:28:23.810742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.122 [2024-11-25 12:28:23.810767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:28.122 [2024-11-25 12:28:23.813119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.122 [2024-11-25 12:28:23.813174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.122 [2024-11-25 12:28:23.813186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:28.122 [2024-11-25 12:28:23.815516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.122 [2024-11-25 12:28:23.815567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.122 [2024-11-25 12:28:23.815580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:28.122 [2024-11-25 12:28:23.817907] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.122 [2024-11-25 12:28:23.817959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.122 [2024-11-25 12:28:23.817972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:28.122 [2024-11-25 12:28:23.820313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.122 [2024-11-25 12:28:23.820374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.122 [2024-11-25 12:28:23.820386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:28.122 [2024-11-25 12:28:23.822741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.122 [2024-11-25 12:28:23.822809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.122 [2024-11-25 12:28:23.822822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:28.122 [2024-11-25 12:28:23.825131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.122 [2024-11-25 12:28:23.825186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.122 [2024-11-25 12:28:23.825200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:28.122 [2024-11-25 12:28:23.827541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.122 [2024-11-25 12:28:23.827590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.122 [2024-11-25 12:28:23.827603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:28.122 [2024-11-25 12:28:23.829952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.122 [2024-11-25 12:28:23.830005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.122 [2024-11-25 12:28:23.830018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:28.122 [2024-11-25 12:28:23.832377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.122 [2024-11-25 12:28:23.832435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.122 [2024-11-25 12:28:23.832448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:28.122 [2024-11-25 12:28:23.834771] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.122 [2024-11-25 12:28:23.834844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.122 [2024-11-25 12:28:23.834857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:28.122 [2024-11-25 12:28:23.837174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.122 [2024-11-25 12:28:23.837233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.122 [2024-11-25 12:28:23.837247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:28.122 [2024-11-25 12:28:23.839546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.122 [2024-11-25 12:28:23.839628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.122 [2024-11-25 12:28:23.839647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:28.122 [2024-11-25 12:28:23.841921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.122 [2024-11-25 12:28:23.841993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.122 [2024-11-25 12:28:23.842006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:28.122 [2024-11-25 12:28:23.844339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.122 [2024-11-25 12:28:23.844391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.122 [2024-11-25 12:28:23.844404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:28.122 [2024-11-25 12:28:23.846744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.122 [2024-11-25 12:28:23.846813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.122 [2024-11-25 12:28:23.846827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:28.122 [2024-11-25 12:28:23.849127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.122 [2024-11-25 12:28:23.849180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.123 [2024-11-25 12:28:23.849193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:28.123 
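NOTE: every record in this long run is the same three-line pattern: tcp.c:2233 (data_crc32_calc_done) flags a data digest mismatch (the CRC32C computed over the received payload does not match the digest carried in the NVMe/TCP PDU), then nvme_qpair.c prints the affected WRITE and its completion with status COMMAND TRANSIENT TRANSPORT ERROR (00/22). Only the LBA and the timestamps vary from record to record. A quick way to tally the records in a saved console log is a grep pair; a minimal sketch (the log file name is hypothetical):

    # Count injected digest failures and the completions they produced.
    grep -c 'data_crc32_calc_done: \*ERROR\*: Data digest error' console.log
    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' console.log

The two counts should agree if every corrupted PDU produced exactly one failed completion.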
[2024-11-25 12:28:23.851540] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.123 [2024-11-25 12:28:23.851592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.123 [2024-11-25 12:28:23.851606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:28.123 [2024-11-25 12:28:23.853961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.123 [2024-11-25 12:28:23.854028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.123 [2024-11-25 12:28:23.854041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:28.123 [2024-11-25 12:28:23.856331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.123 [2024-11-25 12:28:23.856413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.123 [2024-11-25 12:28:23.856426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:28.123 [2024-11-25 12:28:23.858778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.123 [2024-11-25 12:28:23.858833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.123 [2024-11-25 12:28:23.858847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:28.123 [2024-11-25 12:28:23.861177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.123 [2024-11-25 12:28:23.861232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.123 [2024-11-25 12:28:23.861246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:28.123 [2024-11-25 12:28:23.863571] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.123 [2024-11-25 12:28:23.863635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.123 [2024-11-25 12:28:23.863649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:28.123 [2024-11-25 12:28:23.865987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.123 [2024-11-25 12:28:23.866047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.123 [2024-11-25 12:28:23.866060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:29:28.123 [2024-11-25 12:28:23.868376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.123 [2024-11-25 12:28:23.868438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.123 [2024-11-25 12:28:23.868451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:28.123 [2024-11-25 12:28:23.870785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.123 [2024-11-25 12:28:23.870849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.123 [2024-11-25 12:28:23.870862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:28.123 [2024-11-25 12:28:23.873180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.123 [2024-11-25 12:28:23.873243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.123 [2024-11-25 12:28:23.873256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:28.123 [2024-11-25 12:28:23.875596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.123 [2024-11-25 12:28:23.875656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.123 [2024-11-25 12:28:23.875669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:28.123 [2024-11-25 12:28:23.877992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.123 [2024-11-25 12:28:23.878056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.123 [2024-11-25 12:28:23.878069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:28.123 [2024-11-25 12:28:23.880358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.123 [2024-11-25 12:28:23.880466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.123 [2024-11-25 12:28:23.880480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:28.123 [2024-11-25 12:28:23.882760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.123 [2024-11-25 12:28:23.882816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.123 [2024-11-25 12:28:23.882828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:28.123 [2024-11-25 12:28:23.885157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.123 [2024-11-25 12:28:23.885214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.123 [2024-11-25 12:28:23.885228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:28.123 [2024-11-25 12:28:23.887567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.123 [2024-11-25 12:28:23.887623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.123 [2024-11-25 12:28:23.887636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:28.123 [2024-11-25 12:28:23.889970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.123 [2024-11-25 12:28:23.890026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.123 [2024-11-25 12:28:23.890039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:28.123 [2024-11-25 12:28:23.892352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.123 [2024-11-25 12:28:23.892414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.123 [2024-11-25 12:28:23.892426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:28.123 [2024-11-25 12:28:23.894739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.123 [2024-11-25 12:28:23.894804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.123 [2024-11-25 12:28:23.894817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:28.123 [2024-11-25 12:28:23.897145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.123 [2024-11-25 12:28:23.897208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.123 [2024-11-25 12:28:23.897221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:28.123 [2024-11-25 12:28:23.899524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.123 [2024-11-25 12:28:23.899588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.123 [2024-11-25 12:28:23.899602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:28.123 [2024-11-25 12:28:23.901945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.123 [2024-11-25 12:28:23.902009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.123 [2024-11-25 12:28:23.902022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:28.123 [2024-11-25 12:28:23.904352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.123 [2024-11-25 12:28:23.904416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.123 [2024-11-25 12:28:23.904429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:28.123 [2024-11-25 12:28:23.906746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.123 [2024-11-25 12:28:23.906820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.123 [2024-11-25 12:28:23.906833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:28.123 [2024-11-25 12:28:23.909174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.123 [2024-11-25 12:28:23.909232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.123 [2024-11-25 12:28:23.909245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:28.123 [2024-11-25 12:28:23.911631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.123 [2024-11-25 12:28:23.911728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.123 [2024-11-25 12:28:23.911742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:28.123 [2024-11-25 12:28:23.914072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.123 [2024-11-25 12:28:23.914150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.124 [2024-11-25 12:28:23.914164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:28.124 [2024-11-25 12:28:23.916498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8 00:29:28.124 [2024-11-25 12:28:23.916563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.124 [2024-11-25 12:28:23.916576] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:28.124 [2024-11-25 12:28:23.918976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8
00:29:28.124 [2024-11-25 12:28:23.919036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:28.124 [2024-11-25 12:28:23.919050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:28.124 [2024-11-25 12:28:23.921366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1b948b0) with pdu=0x2000166ff3c8
00:29:28.124 12166.00 IOPS, 1520.75 MiB/s [2024-11-25T12:28:24.000Z] [2024-11-25 12:28:23.922441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:28.124 [2024-11-25 12:28:23.922464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:28.124
00:29:28.124 Latency(us)
00:29:28.124 [2024-11-25T12:28:24.000Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:28.124 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:29:28.124 nvme0n1 : 2.00 12160.02 1520.00 0.00 0.00 1312.92 819.20 3629.69
00:29:28.124 [2024-11-25T12:28:24.000Z] ===================================================================================================================
00:29:28.124 [2024-11-25T12:28:24.000Z] Total : 12160.02 1520.00 0.00 0.00 1312.92 819.20 3629.69
00:29:28.124 {
00:29:28.124 "results": [
00:29:28.124 {
00:29:28.124 "job": "nvme0n1",
00:29:28.124 "core_mask": "0x2",
00:29:28.124 "workload": "randwrite",
00:29:28.124 "status": "finished",
00:29:28.124 "queue_depth": 16,
00:29:28.124 "io_size": 131072,
00:29:28.124 "runtime": 2.0023,
00:29:28.124 "iops": 12160.015981621136,
00:29:28.124 "mibps": 1520.001997702642,
00:29:28.124 "io_failed": 0,
00:29:28.124 "io_timeout": 0,
00:29:28.124 "avg_latency_us": 1312.924037355777,
00:29:28.124 "min_latency_us": 819.2,
00:29:28.124 "max_latency_us": 3629.686153846154
00:29:28.124 }
00:29:28.124 ],
00:29:28.124 "core_count": 1
00:29:28.124 }
00:29:28.124 12:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:28.124 12:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:28.124 | .driver_specific
00:29:28.124 | .nvme_error
00:29:28.124 | .status_code
00:29:28.124 | .command_transient_transport_error'
00:29:28.124 12:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:28.124 12:28:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:28.382 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 786 > 0 ))
00:29:28.382 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 93833
00:29:28.382 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 93833 ']'
00:29:28.382 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- #
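NOTE: the xtrace above shows how the test turns this error stream into a pass/fail signal: get_transient_errcount asks bperf's RPC server for the bdev's I/O statistics and jq extracts the transient-transport-error counter, which must be non-zero ((( 786 > 0 ))). A condensed sketch of that helper, assuming the same socket path and bdev name as in the trace:

    get_transient_errcount() {
        # bdev_get_iostat returns JSON; the per-status NVMe error counters
        # live under driver_specific.nvme_error.status_code.
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b "$1" |
            jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
    }
    (( $(get_transient_errcount nvme0n1) > 0 ))   # the suite's assertion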
kill -0 93833 00:29:28.382 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:29:28.382 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:28.382 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93833 00:29:28.382 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:28.382 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:28.382 killing process with pid 93833 00:29:28.382 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93833' 00:29:28.382 Received shutdown signal, test time was about 2.000000 seconds 00:29:28.382 00:29:28.382 Latency(us) 00:29:28.382 [2024-11-25T12:28:24.258Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:28.382 [2024-11-25T12:28:24.258Z] =================================================================================================================== 00:29:28.382 [2024-11-25T12:28:24.258Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:28.382 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 93833 00:29:28.382 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 93833 00:29:28.639 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 93524 00:29:28.639 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 93524 ']' 00:29:28.639 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 93524 00:29:28.639 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:29:28.639 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:28.639 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 93524 00:29:28.639 killing process with pid 93524 00:29:28.639 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:28.639 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:28.639 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 93524' 00:29:28.639 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 93524 00:29:28.639 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 93524 00:29:28.639 00:29:28.639 real 0m16.712s 00:29:28.639 user 0m32.107s 00:29:28.639 sys 0m3.546s 00:29:28.639 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:28.639 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:28.639 ************************************ 00:29:28.639 END TEST nvmf_digest_error 00:29:28.639 ************************************ 00:29:28.639 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest -- 
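NOTE: the killprocess calls traced above follow one safety pattern: bail out if the pid variable is empty, probe liveness with kill -0, read the process name with ps --no-headers -o comm= and refuse to signal a sudo wrapper, then kill and wait. A simplified sketch of that logic as reconstructed from the trace (the real helper in autotest_common.sh carries more error handling):

    killprocess() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 0        # already gone
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [[ $process_name == sudo ]] && return 1       # never kill the sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true
    }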
host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:29:28.639 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:29:28.640 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:28.640 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:29:28.897 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:28.897 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:29:28.897 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:28.897 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:28.897 rmmod nvme_tcp 00:29:28.897 rmmod nvme_fabrics 00:29:28.897 rmmod nvme_keyring 00:29:28.897 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:28.897 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:29:28.897 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:29:28.897 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 93524 ']' 00:29:28.897 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 93524 00:29:28.897 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 93524 ']' 00:29:28.897 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 93524 00:29:28.897 Process with pid 93524 is not found 00:29:28.897 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (93524) - No such process 00:29:28.897 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 93524 is not found' 00:29:28.897 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:28.897 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:28.897 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:28.897 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:29:28.897 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:29:28.897 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:28.897 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:29:28.897 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:28.897 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:29:28.897 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:29:28.897 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:29:28.897 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:29:28.897 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:29:28.897 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:29:28.897 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:29:28.897 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:29:28.897 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- 
# ip link set nvmf_tgt_br2 down 00:29:28.897 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:29:28.897 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:29:28.897 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:29:29.156 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:29.156 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:29.156 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:29:29.156 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:29.156 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:29.156 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:29.156 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:29:29.156 00:29:29.156 real 0m34.736s 00:29:29.156 user 1m5.456s 00:29:29.156 sys 0m7.389s 00:29:29.156 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:29.156 12:28:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:29.156 ************************************ 00:29:29.156 END TEST nvmf_digest 00:29:29.156 ************************************ 00:29:29.156 12:28:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 1 -eq 1 ]] 00:29:29.156 12:28:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ tcp == \t\c\p ]] 00:29:29.156 12:28:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@38 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:29:29.156 12:28:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:29.156 12:28:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:29.156 12:28:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:29.156 ************************************ 00:29:29.156 START TEST nvmf_mdns_discovery 00:29:29.156 ************************************ 00:29:29.156 12:28:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:29:29.156 * Looking for test storage... 
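NOTE: between END TEST nvmf_digest above and the mdns suite below, nvmftestfini unwound the whole fixture: unload nvme-tcp and nvme-fabrics (retried inside set +e, since rmmod can fail transiently while disconnects drain), strip only the firewall rules tagged SPDK_NVMF by filtering iptables-save through grep -v before iptables-restore, and delete the veth/bridge topology plus the target namespace. Condensed sketch of the traced order (the final netns delete is an assumption about what _remove_spdk_ns does):

    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    done
    set -e

    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only our rules

    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster
        ip link set "$dev" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk    # assumed behavior of _remove_spdk_ns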
00:29:29.156 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:29:29.156 12:28:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:29.156 12:28:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:29:29.156 12:28:24 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:29.156 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:29.156 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:29.156 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:29.156 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:29.156 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:29:29.156 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:29:29.156 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:29:29.156 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:29:29.156 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:29:29.156 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:29:29.156 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:29:29.156 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:29.156 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@344 -- # case "$op" in 00:29:29.156 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@345 -- # : 1 00:29:29.156 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:29.156 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:29.156 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@365 -- # decimal 1 00:29:29.156 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@353 -- # local d=1 00:29:29.156 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:29.156 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@355 -- # echo 1 00:29:29.156 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:29:29.156 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@366 -- # decimal 2 00:29:29.156 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@353 -- # local d=2 00:29:29.156 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:29.156 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@355 -- # echo 2 00:29:29.156 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:29:29.156 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:29.156 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:29.156 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@368 -- # return 0 00:29:29.156 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:29.156 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:29.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.156 --rc genhtml_branch_coverage=1 00:29:29.156 --rc genhtml_function_coverage=1 00:29:29.156 --rc genhtml_legend=1 00:29:29.156 --rc geninfo_all_blocks=1 00:29:29.156 --rc geninfo_unexecuted_blocks=1 00:29:29.156 00:29:29.156 ' 00:29:29.156 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:29.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.156 --rc genhtml_branch_coverage=1 00:29:29.156 --rc genhtml_function_coverage=1 00:29:29.156 --rc genhtml_legend=1 00:29:29.156 --rc geninfo_all_blocks=1 00:29:29.156 --rc geninfo_unexecuted_blocks=1 00:29:29.156 00:29:29.156 ' 00:29:29.156 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:29.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.156 --rc genhtml_branch_coverage=1 00:29:29.156 --rc genhtml_function_coverage=1 00:29:29.156 --rc genhtml_legend=1 00:29:29.156 --rc geninfo_all_blocks=1 00:29:29.156 --rc geninfo_unexecuted_blocks=1 00:29:29.156 00:29:29.156 ' 00:29:29.156 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:29.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:29.156 --rc genhtml_branch_coverage=1 00:29:29.156 --rc genhtml_function_coverage=1 00:29:29.156 --rc genhtml_legend=1 00:29:29.156 --rc geninfo_all_blocks=1 00:29:29.156 --rc geninfo_unexecuted_blocks=1 00:29:29.156 00:29:29.156 ' 00:29:29.156 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:29.157 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@7 -- # uname -s 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
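NOTE: the lt/cmp_versions trace above is scripts/common.sh deciding whether the installed lcov (1.15) predates 2.x, which determines the --rc option spellings exported just afterwards. The comparison splits each version string on ., - and : (IFS=.-:) and walks the components numerically. A sketch covering only the '<' branch exercised here (the real helper also takes an operator argument and handles '>' and '='):

    lt() { cmp_versions_lt "$1" "$2"; }

    cmp_versions_lt() {
        local -a ver1 ver2
        local v max IFS=.-:
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly older
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # newer
        done
        return 1                                              # equal
    }

    lt 1.15 2 && echo "lcov 1.x: use the old --rc flag spellings"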
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@5 -- # export PATH 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@51 -- # : 0 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:29.415 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@13 -- # DISCOVERY_FILTER=address 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@14 -- # DISCOVERY_PORT=8009 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
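NOTE: two details in this sourcing pass deserve a flag. First, the "[: : integer expression expected" message from nvmf/common.sh line 33 is a real (if harmless) scripting bug captured above: an unset variable is compared with -eq, so test sees an empty string; guarding the expansion silences it. Second, the host identity is minted once per run with nvme gen-hostnqn, and the harness keeps both the full NQN and its bare UUID. A sketch of both (SOME_FLAG is hypothetical, and the suffix-strip for NVME_HOSTID is an assumption about how the value is derived):

    [ "${SOME_FLAG:-0}" -eq 1 ]               # defensive form of the line-33 test

    NVME_HOSTNQN=$(nvme gen-hostnqn)          # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:uuid:}      # assumed: HOSTID is the UUID tail
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")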
host/mdns_discovery.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@18 -- # NQN=nqn.2016-06.io.spdk:cnode 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@19 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@21 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@22 -- # HOST_SOCK=/tmp/host.sock 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@24 -- # nvmftestinit 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@155 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:29:29.415 Cannot find device "nvmf_init_br" 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@162 -- # true 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:29:29.415 Cannot find device "nvmf_init_br2" 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@163 -- # true 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:29:29.415 Cannot find device "nvmf_tgt_br" 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@164 -- # true 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:29:29.415 Cannot find device "nvmf_tgt_br2" 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@165 -- # true 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:29:29.415 Cannot find device "nvmf_init_br" 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@166 -- # true 00:29:29.415 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:29:29.415 Cannot find device "nvmf_init_br2" 00:29:29.416 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@167 -- # true 00:29:29.416 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:29:29.416 Cannot find device "nvmf_tgt_br" 00:29:29.416 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@168 -- # true 00:29:29.416 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:29:29.416 Cannot find device "nvmf_tgt_br2" 00:29:29.416 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@169 -- # true 00:29:29.416 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:29:29.416 Cannot find device "nvmf_br" 00:29:29.416 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@170 -- # true 00:29:29.416 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:29:29.416 Cannot find device "nvmf_init_if" 00:29:29.416 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@171 -- # true 00:29:29.416 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:29:29.416 Cannot find device "nvmf_init_if2" 00:29:29.416 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@172 -- # true 00:29:29.416 12:28:25 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:29.416 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:29.416 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@173 -- # true 00:29:29.416 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:29.416 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:29.416 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@174 -- # true 00:29:29.416 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:29:29.416 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:29.416 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:29:29.416 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:29.416 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:29.416 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:29.416 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:29.416 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:29.416 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:29:29.416 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:29:29.416 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:29:29.416 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:29:29.416 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:29:29.416 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:29:29.416 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:29:29.416 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:29:29.416 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:29:29.416 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:29.416 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:29.416 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:29.416 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:29:29.416 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 
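The nvmf_veth_init sequence above first tears down any leftover interfaces -- the "Cannot find device" and "Cannot open network namespace" errors are expected on a fresh host, and each teardown command is followed by "true" so cleanup failures do not abort the run -- and then builds the test topology: four veth pairs, with the target-side ends moved into the nvmf_tgt_ns_spdk namespace and addressed out of 10.0.0.0/24. A condensed sketch of the same layout, using the names and addresses from the trace (a sketch of what the helper does, not the helper itself):

# Sketch: two initiator-side interfaces on the host (10.0.0.1/.2),
# two target-side interfaces inside the namespace (10.0.0.3/.4)
NS=nvmf_tgt_ns_spdk
ip netns add "$NS"
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns "$NS"      # target ends live inside the namespace
ip link set nvmf_tgt_if2 netns "$NS"
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up

The commands that follow in the trace enslave the four *_br peer ends to nvmf_br and insert iptables ACCEPT rules for TCP port 4420, after which the four pings confirm reachability in both directions across the bridge.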
00:29:29.416 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:29:29.416 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:29:29.416 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:29.674 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:29.674 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:29.674 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:29:29.674 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:29:29.674 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:29:29.674 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:29.674 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:29:29.674 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:29:29.674 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:29.674 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:29:29.674 00:29:29.674 --- 10.0.0.3 ping statistics --- 00:29:29.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:29.674 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:29:29.674 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:29:29.674 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:29:29.674 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:29:29.674 00:29:29.674 --- 10.0.0.4 ping statistics --- 00:29:29.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:29.674 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:29:29.674 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:29.674 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:29.674 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:29:29.674 00:29:29.674 --- 10.0.0.1 ping statistics --- 00:29:29.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:29.674 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:29:29.674 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:29:29.674 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:29.674 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:29:29.674 00:29:29.674 --- 10.0.0.2 ping statistics --- 00:29:29.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:29.674 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:29:29.674 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:29.674 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@461 -- # return 0 00:29:29.674 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:29.674 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:29.674 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:29.674 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:29.674 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:29.674 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:29.674 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:29.674 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@29 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:29:29.674 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:29.674 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:29.674 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:29.674 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@509 -- # nvmfpid=94175 00:29:29.674 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@510 -- # waitforlisten 94175 00:29:29.674 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@835 -- # '[' -z 94175 ']' 00:29:29.674 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:29:29.674 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:29.674 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:29.674 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:29.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:29.674 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:29.674 12:28:25 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:29.674 [2024-11-25 12:28:25.395293] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
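With the topology verified, nvmfappstart launches the target inside the namespace with --wait-for-rpc, so the app comes up with its subsystems paused until framework_start_init arrives over RPC; that window is what lets mdns_discovery.sh@31 apply nvmf_set_config --discovery-filter=address before initialization. Roughly (the polling loop is an assumed stand-in for the waitforlisten helper, not its actual code):

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
nvmfpid=$!
# assumed stand-in for waitforlisten: poll until the RPC socket appears
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done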
00:29:29.674 [2024-11-25 12:28:25.395368] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:29.674 [2024-11-25 12:28:25.529257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:29.932 [2024-11-25 12:28:25.571093] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:29.932 [2024-11-25 12:28:25.571135] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:29.932 [2024-11-25 12:28:25.571142] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:29.932 [2024-11-25 12:28:25.571147] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:29.932 [2024-11-25 12:28:25.571152] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:29.932 [2024-11-25 12:28:25.571450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:30.495 12:28:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:30.495 12:28:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@868 -- # return 0 00:29:30.495 12:28:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:30.495 12:28:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:30.495 12:28:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:30.495 12:28:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:30.495 12:28:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@31 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:29:30.495 12:28:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.495 12:28:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:30.495 12:28:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.495 12:28:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@32 -- # rpc_cmd framework_start_init 00:29:30.495 12:28:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.495 12:28:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:30.753 12:28:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.753 12:28:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:30.753 12:28:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.753 12:28:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:30.753 [2024-11-25 12:28:26.425035] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:30.753 12:28:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.753 12:28:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:29:30.753 12:28:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.753 12:28:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:30.753 [2024-11-25 12:28:26.433155] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:29:30.753 12:28:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.753 12:28:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null0 1000 512 00:29:30.753 12:28:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.753 12:28:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:30.753 null0 00:29:30.753 12:28:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.753 12:28:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null1 1000 512 00:29:30.753 12:28:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.753 12:28:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:30.753 null1 00:29:30.753 12:28:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.753 12:28:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null2 1000 512 00:29:30.753 12:28:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.753 12:28:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:30.753 null2 00:29:30.753 12:28:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.753 12:28:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_null_create null3 1000 512 00:29:30.753 12:28:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.753 12:28:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:30.753 null3 00:29:30.753 12:28:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.753 12:28:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@40 -- # rpc_cmd bdev_wait_for_examine 00:29:30.753 12:28:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.753 12:28:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:30.753 12:28:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.753 12:28:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@48 -- # hostpid=94225 00:29:30.753 12:28:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@49 -- # waitforlisten 94225 /tmp/host.sock 00:29:30.753 12:28:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@835 -- # '[' -z 94225 ']' 00:29:30.753 12:28:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:29:30.753 12:28:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@839 -- # local 
rpc_addr=/tmp/host.sock 00:29:30.753 12:28:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:30.753 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:29:30.753 12:28:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:29:30.753 12:28:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:30.753 12:28:26 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:30.753 [2024-11-25 12:28:26.517704] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:29:30.753 [2024-11-25 12:28:26.517790] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94225 ] 00:29:31.010 [2024-11-25 12:28:26.653598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:31.010 [2024-11-25 12:28:26.691960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:31.640 12:28:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:31.640 12:28:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@868 -- # return 0 00:29:31.641 12:28:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:29:31.641 12:28:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@52 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahipid;' EXIT 00:29:31.641 12:28:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@56 -- # avahi-daemon --kill 00:29:31.897 12:28:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@58 -- # avahipid=94255 00:29:31.897 12:28:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@59 -- # sleep 1 00:29:31.897 12:28:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:29:31.897 12:28:27 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@57 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:29:31.897 Process 1070 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:29:31.897 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:29:31.897 Successfully dropped root privileges. 00:29:31.897 avahi-daemon 0.8 starting up. 00:29:31.897 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:29:31.897 Successfully called chroot(). 00:29:32.828 Successfully dropped remaining capabilities. 00:29:32.828 No service file found in /etc/avahi/services. 00:29:32.828 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.4. 00:29:32.828 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:29:32.828 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.3. 00:29:32.828 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:29:32.828 Network interface enumeration completed. 00:29:32.828 Registering new address record for fe80::6084:d4ff:fe9b:2260 on nvmf_tgt_if2.*. 
00:29:32.828 Registering new address record for 10.0.0.4 on nvmf_tgt_if2.IPv4. 00:29:32.828 Registering new address record for fe80::3c47:c4ff:feac:c7a5 on nvmf_tgt_if.*. 00:29:32.828 Registering new address record for 10.0.0.3 on nvmf_tgt_if.IPv4. 00:29:32.828 Server startup complete. Host name is fedora39-cloud-1721788873-2326.local. Local service cookie is 1436735123. 00:29:32.828 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:29:32.828 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.828 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:32.828 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.828 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:29:32.828 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.828 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:32.828 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.828 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@114 -- # notify_id=0 00:29:32.828 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@120 -- # get_subsystem_names 00:29:32.828 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:29:32.828 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:29:32.829 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:32.829 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.829 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:32.829 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:29:32.829 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.829 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@120 -- # [[ '' == '' ]] 00:29:32.829 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # get_bdev_list 00:29:32.829 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:32.829 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:29:32.829 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:29:32.829 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.829 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:29:32.829 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:32.829 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.829 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@121 -- # [[ '' == '' ]] 00:29:32.829 12:28:28 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@123 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:29:32.829 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.829 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:32.829 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.829 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # get_subsystem_names 00:29:32.829 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:32.829 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.829 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:29:32.829 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:29:32.829 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:29:32.829 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:32.829 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.829 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@124 -- # [[ '' == '' ]] 00:29:32.829 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # get_bdev_list 00:29:32.829 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:32.829 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.829 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:32.829 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:29:32.829 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:29:32.829 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:29:32.829 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.829 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@125 -- # [[ '' == '' ]] 00:29:32.829 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@127 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:29:32.829 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.829 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:32.829 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.829 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # get_subsystem_names 00:29:32.829 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:29:32.829 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:32.829 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:29:32.829 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:29:32.829 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:32.829 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:29:32.829 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.086 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@128 -- # [[ '' == '' ]] 00:29:33.086 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # get_bdev_list 00:29:33.086 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:33.086 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:29:33.086 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.086 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:29:33.086 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:33.086 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:29:33.086 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.086 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@129 -- # [[ '' == '' ]] 00:29:33.086 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@133 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:29:33.086 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.086 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:33.086 [2024-11-25 12:28:28.761587] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:29:33.086 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.086 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:29:33.086 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.086 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:33.086 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.086 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@140 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:29:33.086 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.086 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:33.086 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.086 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@141 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:29:33.086 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.086 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:33.086 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.086 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@145 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:29:33.086 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.086 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:33.086 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.086 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_publish_mdns_prr 00:29:33.086 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.086 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:33.086 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.086 12:28:28 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@149 -- # sleep 5 00:29:33.086 [2024-11-25 12:28:28.825508] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:29:34.015 [2024-11-25 12:28:29.725519] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:29:34.273 [2024-11-25 12:28:30.025533] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:29:34.273 [2024-11-25 12:28:30.025558] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:29:34.273 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:29:34.273 cookie is 0 00:29:34.273 is_local: 1 00:29:34.273 our_own: 0 00:29:34.273 wide_area: 0 00:29:34.273 multicast: 1 00:29:34.273 cached: 1 00:29:34.273 [2024-11-25 12:28:30.125523] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:29:34.273 [2024-11-25 12:28:30.125543] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:29:34.273 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:29:34.273 cookie is 0 00:29:34.273 is_local: 1 00:29:34.273 our_own: 0 00:29:34.273 wide_area: 0 00:29:34.273 multicast: 1 00:29:34.273 cached: 1 00:29:35.204 [2024-11-25 12:28:31.029890] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:29:35.204 [2024-11-25 12:28:31.029922] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:29:35.204 [2024-11-25 12:28:31.029934] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:29:35.462 [2024-11-25 12:28:31.115970] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem mdns0_nvme0 00:29:35.462 [2024-11-25 12:28:31.126463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.462 [2024-11-25 12:28:31.126497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2499870 with addr=10.0.0.4, port=8009 00:29:35.462 [2024-11-25 12:28:31.126511] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:35.462 [2024-11-25 12:28:31.126517] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:35.462 [2024-11-25 
12:28:31.126523] bdev_nvme.c:7547:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect 00:29:35.462 [2024-11-25 12:28:31.170235] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:29:35.462 [2024-11-25 12:28:31.170778] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x24aff60:1 started. 00:29:35.462 [2024-11-25 12:28:31.172120] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:29:35.462 [2024-11-25 12:28:31.172138] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:29:35.462 [2024-11-25 12:28:31.178563] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x24aff60 was disconnected and freed. delete nvme_qpair. 00:29:36.393 [2024-11-25 12:28:32.126531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.393 [2024-11-25 12:28:32.126584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24afd50 with addr=10.0.0.4, port=8009 00:29:36.394 [2024-11-25 12:28:32.126599] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:36.394 [2024-11-25 12:28:32.126605] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:36.394 [2024-11-25 12:28:32.126611] bdev_nvme.c:7547:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect 00:29:37.325 [2024-11-25 12:28:33.126521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.325 [2024-11-25 12:28:33.126572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2499130 with addr=10.0.0.4, port=8009 00:29:37.325 [2024-11-25 12:28:33.126586] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:37.325 [2024-11-25 12:28:33.126592] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:37.325 [2024-11-25 12:28:33.126597] bdev_nvme.c:7547:discovery_poller: *ERROR*: Discovery[10.0.0.4:8009] could not start discovery connect 00:29:38.257 12:28:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@152 -- # check_mdns_request_exists spdk1 10.0.0.4 8009 'not found' 00:29:38.257 12:28:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:29:38.257 12:28:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.4 00:29:38.257 12:28:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:29:38.257 12:28:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local 'check_type=not found' 00:29:38.257 12:28:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:29:38.257 12:28:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:29:38.257 12:28:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:29:38.257 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:29:38.257 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:29:38.257 
=;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:29:38.257 12:28:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:29:38.257 12:28:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:29:38.257 12:28:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:29:38.257 12:28:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:29:38.257 12:28:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:29:38.257 12:28:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:29:38.257 12:28:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:29:38.257 12:28:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:29:38.257 12:28:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:29:38.257 12:28:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # [[ not found == \f\o\u\n\d ]] 00:29:38.257 12:28:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@108 -- # return 0 00:29:38.257 12:28:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@154 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.4 -s 8009 00:29:38.257 12:28:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.257 12:28:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:38.257 [2024-11-25 12:28:33.847036] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 8009 *** 00:29:38.257 [2024-11-25 12:28:33.849192] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:29:38.257 [2024-11-25 12:28:33.849215] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:29:38.257 12:28:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.257 12:28:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@156 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4420 00:29:38.257 12:28:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.257 12:28:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:38.257 [2024-11-25 12:28:33.854946] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:29:38.257 [2024-11-25 12:28:33.855169] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:29:38.257 12:28:33 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.257 12:28:33 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@157 -- # sleep 1 00:29:38.257 [2024-11-25 12:28:33.986269] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:29:38.257 [2024-11-25 12:28:33.986289] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:29:38.257 [2024-11-25 12:28:34.072332] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:29:38.257 [2024-11-25 12:28:34.130441] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr attached 00:29:38.257 [2024-11-25 12:28:34.130456] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr connected 00:29:38.257 [2024-11-25 12:28:34.130465] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:29:38.515 [2024-11-25 12:28:34.216521] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 new subsystem mdns1_nvme0 00:29:38.515 [2024-11-25 12:28:34.270778] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr was created to 10.0.0.4:4420 00:29:38.515 [2024-11-25 12:28:34.271280] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Connecting qpair 0x24acbc0:1 started. 00:29:38.515 [2024-11-25 12:28:34.272400] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns1_nvme0 done 00:29:38.515 [2024-11-25 12:28:34.272416] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again 00:29:38.515 [2024-11-25 12:28:34.279230] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpair 0x24acbc0 was disconnected and freed. delete nvme_qpair. 
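check_mdns_request_exists (mdns_discovery.sh@85-108, traced above at @152 and again below at @160) drives avahi-browse in parseable mode and scans its output: '+' lines announce a service per interface/protocol, '=' lines carry the resolved hostname, address, port, and TXT record as semicolon-separated fields, and -t makes avahi-browse dump the current cache and exit. The @152 call asserted that no spdk1 service was advertised yet; the @160 call below expects it on 10.0.0.4:8009 now that the second discovery listener has been added. A rough standalone equivalent of the check (simplified: the real helper, as seen at @96, layers the process, address, and port matches and handles the two expectations in separate branches):

# sketch of the avahi-browse check; check_type is 'found' or 'not found'
check_mdns_request_exists() {
    local process=$1 ip=$2 port=$3 check_type=$4 line
    while IFS= read -r line; do
        if [[ $line == *"$process"* && $line == *"$ip"* && $line == *";$port;"* ]]; then
            [[ $check_type == found ]] && return 0   # advertised, as expected
            return 1                                 # advertised, but expected absent
        fi
    done < <(avahi-browse -t -r _nvme-disc._tcp -p)
    [[ $check_type == found ]] && return 1           # not advertised, but expected
    return 0
}

Called as in the trace -- check_mdns_request_exists spdk1 10.0.0.4 8009 found -- this passes only once the new listener's mDNS advertisement has propagated, which is why the test sleeps between publishing and checking.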
00:29:39.079 12:28:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@160 -- # check_mdns_request_exists spdk1 10.0.0.4 8009 found 00:29:39.079 12:28:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:29:39.079 12:28:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.4 00:29:39.079 12:28:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:29:39.079 12:28:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local check_type=found 00:29:39.079 12:28:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:29:39.079 12:28:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:29:39.079 12:28:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:29:39.079 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:29:39.079 +;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:29:39.079 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:29:39.079 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:29:39.079 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:29:39.079 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:29:39.079 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:29:39.079 12:28:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:29:39.079 12:28:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:29:39.079 12:28:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:29:39.079 12:28:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\4* ]] 00:29:39.079 12:28:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:29:39.079 12:28:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:29:39.079 12:28:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:29:39.079 12:28:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:29:39.079 12:28:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\4* ]] 00:29:39.079 12:28:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:29:39.079 12:28:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:29:39.079 12:28:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:29:39.079 12:28:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:29:39.079 12:28:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\4* ]] 00:29:39.079 12:28:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\8\0\0\9* ]] 00:29:39.079 12:28:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ found == \f\o\u\n\d ]] 00:29:39.079 12:28:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@98 -- # return 0 00:29:39.079 12:28:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # get_mdns_discovery_svcs 00:29:39.079 12:28:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:29:39.079 12:28:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:29:39.079 12:28:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:29:39.079 12:28:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.079 12:28:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:29:39.079 12:28:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:39.079 12:28:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.079 12:28:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@162 -- # [[ mdns == \m\d\n\s ]] 00:29:39.079 12:28:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@163 -- # get_discovery_ctrlrs 00:29:39.079 12:28:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:39.079 12:28:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:29:39.079 12:28:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:29:39.079 12:28:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:29:39.079 12:28:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.079 12:28:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:39.079 12:28:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.337 12:28:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@163 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:29:39.337 12:28:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:29:39.337 12:28:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:39.337 12:28:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:29:39.337 12:28:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:29:39.337 
12:28:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:29:39.337 12:28:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.337 12:28:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:39.337 12:28:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.337 12:28:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:29:39.337 12:28:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:29:39.337 12:28:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:29:39.337 12:28:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:39.337 12:28:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:29:39.337 12:28:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.337 12:28:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:29:39.337 12:28:34 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:39.337 12:28:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.337 12:28:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:29:39.337 [2024-11-25 12:28:35.025581] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:29:39.337 [2024-11-25 12:28:35.025606] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:29:39.337 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:29:39.337 cookie is 0 00:29:39.337 is_local: 1 00:29:39.337 our_own: 0 00:29:39.337 wide_area: 0 00:29:39.337 multicast: 1 00:29:39.337 cached: 1 00:29:39.337 [2024-11-25 12:28:35.025614] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. 
trid->traddr: 10.0.0.3 trid->trsvcid: 8009
00:29:39.337 12:28:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0
00:29:39.337 12:28:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0
00:29:39.337 12:28:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:39.337 12:28:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:39.337 12:28:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:29:39.337 12:28:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n
00:29:39.337 12:28:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs
00:29:39.337 12:28:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:39.337 12:28:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@166 -- # [[ 4420 == \4\4\2\0 ]]
00:29:39.337 12:28:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0
00:29:39.337 12:28:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:29:39.337 12:28:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0
00:29:39.337 12:28:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:39.337 12:28:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs
00:29:39.337 12:28:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:39.337 12:28:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n
00:29:39.337 12:28:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:39.337 12:28:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@167 -- # [[ 4420 == \4\4\2\0 ]]
00:29:39.337 12:28:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@168 -- # get_notification_count
00:29:39.337 12:28:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length'
00:29:39.337 12:28:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:29:39.337 12:28:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:39.337 12:28:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:39.337 12:28:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:39.337 12:28:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=2
00:29:39.337 12:28:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=2
00:29:39.337 12:28:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@169 -- # [[ 2 == 2 ]]
00:29:39.337 12:28:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@172 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
00:29:39.337 12:28:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:39.337 12:28:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:39.337 12:28:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:39.337 12:28:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@173 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3
00:29:39.337 12:28:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:39.337 12:28:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:39.337 [2024-11-25 12:28:35.141819] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x24b5220:1 started.
00:29:39.337 12:28:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:39.337 12:28:35 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@174 -- # sleep 1
00:29:39.337 [2024-11-25 12:28:35.144125] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Connecting qpair 0x2535810:1 started.
00:29:39.337 [2024-11-25 12:28:35.149528] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x24b5220 was disconnected and freed. delete nvme_qpair.
00:29:39.337 [2024-11-25 12:28:35.149904] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpair 0x2535810 was disconnected and freed. delete nvme_qpair.
00:29:39.595 [2024-11-25 12:28:35.325580] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local'
00:29:39.595 [2024-11-25 12:28:35.325598] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4)
00:29:39.595 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"
00:29:39.595 cookie is 0
00:29:39.595 is_local: 1
00:29:39.595 our_own: 0
00:29:39.595 wide_area: 0
00:29:39.595 multicast: 1
00:29:39.595 cached: 1
00:29:39.595 [2024-11-25 12:28:35.325606] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already.
trid->traddr: 10.0.0.4 trid->trsvcid: 8009
00:29:40.526 12:28:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # get_bdev_list
00:29:40.526 12:28:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name'
00:29:40.526 12:28:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs
00:29:40.526 12:28:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:29:40.526 12:28:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort
00:29:40.526 12:28:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:40.526 12:28:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:40.526 12:28:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:40.526 12:28:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@176 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]]
00:29:40.526 12:28:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@177 -- # get_notification_count
00:29:40.526 12:28:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:29:40.526 12:28:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:40.526 12:28:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length'
00:29:40.526 12:28:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:40.526 12:28:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:40.526 12:28:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=2
00:29:40.526 12:28:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4
00:29:40.526 12:28:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@178 -- # [[ 2 == 2 ]]
00:29:40.526 12:28:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@182 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421
00:29:40.526 12:28:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:40.526 12:28:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:40.526 [2024-11-25 12:28:36.223820] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 ***
00:29:40.526 [2024-11-25 12:28:36.224445] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer
00:29:40.526 [2024-11-25 12:28:36.224502] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command
00:29:40.526 [2024-11-25 12:28:36.224526] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer
00:29:40.526 [2024-11-25 12:28:36.224535] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command
00:29:40.526 12:28:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:40.526 12:28:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@183 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4421
00:29:40.526 12:28:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:40.526 12:28:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:40.526 [2024-11-25 12:28:36.231722] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4421 ***
00:29:40.526 [2024-11-25 12:28:36.232445] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer
00:29:40.526 [2024-11-25 12:28:36.232473] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer
00:29:40.526 12:28:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:40.526 12:28:36 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@184 -- # sleep 1
00:29:40.526 [2024-11-25 12:28:36.363525] bdev_nvme.c:7403:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for mdns0_nvme0
00:29:40.526 [2024-11-25 12:28:36.363901] bdev_nvme.c:7403:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 new path for mdns1_nvme0
00:29:40.783 [2024-11-25 12:28:36.424095] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 2] ctrlr was created to 10.0.0.4:4421
00:29:40.783 [2024-11-25 12:28:36.424202] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns1_nvme0 done
00:29:40.783 [2024-11-25 12:28:36.424248] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again
00:29:40.783 [2024-11-25 12:28:36.424268] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again
00:29:40.783 [2024-11-25 12:28:36.424295] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command
00:29:40.783 [2024-11-25 12:28:36.424924] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421
00:29:40.783 [2024-11-25 12:28:36.425000] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done
00:29:40.783 [2024-11-25 12:28:36.425043] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again
00:29:40.783 [2024-11-25 12:28:36.425062] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again
00:29:40.783 [2024-11-25 12:28:36.425088] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command
00:29:40.783 [2024-11-25 12:28:36.469593] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 found again
00:29:40.783 [2024-11-25 12:28:36.469662] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again
00:29:40.783 [2024-11-25 12:28:36.470590] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again
00:29:40.783 [2024-11-25 12:28:36.470643] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again
00:29:41.718 12:28:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # get_subsystem_names
00:29:41.718 12:28:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:29:41.718 12:28:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name'
00:29:41.718 12:28:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort
00:29:41.718 12:28:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs
00:29:41.718 12:28:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:41.718 12:28:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:41.718 12:28:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:41.718 12:28:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]]
00:29:41.718 12:28:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # get_bdev_list
00:29:41.718 12:28:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:29:41.718 12:28:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:41.718 12:28:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:41.718 12:28:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name'
00:29:41.718 12:28:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort
00:29:41.718 12:28:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs
00:29:41.718 12:28:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:41.718 12:28:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]]
00:29:41.718 12:28:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@188 -- # get_subsystem_paths mdns0_nvme0
00:29:41.718 12:28:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0
00:29:41.718 12:28:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:41.718 12:28:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:41.718 12:28:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n
00:29:41.718 12:28:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:29:41.718 12:28:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs
00:29:41.718 12:28:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:41.718 12:28:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@188 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]]
00:29:41.718 12:28:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@189 -- # get_subsystem_paths mdns1_nvme0
00:29:41.718 12:28:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0
00:29:41.718 12:28:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:41.718 12:28:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:41.718 12:28:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:29:41.718 12:28:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n
00:29:41.718 12:28:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs
00:29:41.718 12:28:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:41.718 12:28:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@189 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]]
00:29:41.718 12:28:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@190 -- # get_notification_count
00:29:41.718 12:28:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length'
00:29:41.718 12:28:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4
00:29:41.718 12:28:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:41.718 12:28:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:41.718 12:28:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:41.718 12:28:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=0
00:29:41.718 12:28:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4
00:29:41.719 12:28:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@191 -- # [[ 0 == 0 ]]
00:29:41.719 12:28:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@195 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
00:29:41.719 12:28:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:41.719 12:28:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:41.719 [2024-11-25 12:28:37.420658] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer
00:29:41.719 [2024-11-25 12:28:37.420682] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command
00:29:41.719 [2024-11-25 12:28:37.420705] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer
00:29:41.719 [2024-11-25 12:28:37.420714] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command
00:29:41.719 [2024-11-25 12:28:37.421564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:41.719 [2024-11-25 12:28:37.421584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:41.719 [2024-11-25 12:28:37.421590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:29:41.719 [2024-11-25 12:28:37.421595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:41.719 [2024-11-25 12:28:37.421600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:29:41.719 [2024-11-25 12:28:37.421606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:41.719 [2024-11-25 12:28:37.421611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:41.719 [2024-11-25 12:28:37.421615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:41.719 [2024-11-25 12:28:37.421619] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248c550 is same with the state(6) to be set
00:29:41.719 12:28:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:41.719 12:28:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@196 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.4 -s 4420
00:29:41.719 12:28:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:41.719 12:28:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x
00:29:41.719 [2024-11-25 12:28:37.428661] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer
00:29:41.719 [2024-11-25 12:28:37.428688] bdev_nvme.c:7461:discovery_aer_cb: *INFO*: Discovery[10.0.0.4:8009] got aer
00:29:41.719 [2024-11-25 12:28:37.431530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x248c550 (9): Bad file descriptor
00:29:41.719 12:28:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:41.719 12:28:37 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@197 -- # sleep 1
00:29:41.719 [2024-11-25 12:28:37.434290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:41.719 [2024-11-25 12:28:37.434305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:41.719 [2024-11-25 12:28:37.434311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:29:41.719 [2024-11-25 12:28:37.434315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:41.719 [2024-11-25 12:28:37.434320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:29:41.719 [2024-11-25 12:28:37.434325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:41.719 [2024-11-25 12:28:37.434330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:29:41.719 [2024-11-25 12:28:37.434335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:41.719 [2024-11-25 12:28:37.434339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2489320 is same with the state(6) to be set
00:29:41.719 [2024-11-25 12:28:37.441542] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:29:41.719 [2024-11-25 12:28:37.441553] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:29:41.719 [2024-11-25 12:28:37.441558] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:29:41.719 [2024-11-25 12:28:37.441561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:29:41.719 [2024-11-25 12:28:37.441584] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:29:41.719 [2024-11-25 12:28:37.441641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.719 [2024-11-25 12:28:37.441650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x248c550 with addr=10.0.0.3, port=4420
00:29:41.719 [2024-11-25 12:28:37.441655] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248c550 is same with the state(6) to be set
00:29:41.719 [2024-11-25 12:28:37.441664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x248c550 (9): Bad file descriptor
00:29:41.719 [2024-11-25 12:28:37.441681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:29:41.719 [2024-11-25 12:28:37.441686] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:29:41.719 [2024-11-25 12:28:37.441692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:29:41.719 [2024-11-25 12:28:37.441696] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:29:41.719 [2024-11-25 12:28:37.441699] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:29:41.719 [2024-11-25 12:28:37.441702] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:29:41.719 [2024-11-25 12:28:37.444272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2489320 (9): Bad file descriptor
00:29:41.719 [2024-11-25 12:28:37.451589] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:29:41.719 [2024-11-25 12:28:37.451599] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:29:41.719 [2024-11-25 12:28:37.451602] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:29:41.719 [2024-11-25 12:28:37.451604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:29:41.719 [2024-11-25 12:28:37.451619] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:29:41.719 [2024-11-25 12:28:37.451644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.719 [2024-11-25 12:28:37.451651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x248c550 with addr=10.0.0.3, port=4420 00:29:41.719 [2024-11-25 12:28:37.451656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248c550 is same with the state(6) to be set 00:29:41.719 [2024-11-25 12:28:37.451664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x248c550 (9): Bad file descriptor 00:29:41.719 [2024-11-25 12:28:37.451671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:41.719 [2024-11-25 12:28:37.451676] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:41.719 [2024-11-25 12:28:37.451680] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:41.719 [2024-11-25 12:28:37.451684] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:29:41.719 [2024-11-25 12:28:37.451687] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:29:41.719 [2024-11-25 12:28:37.451689] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:29:41.719 [2024-11-25 12:28:37.454276] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:29:41.719 [2024-11-25 12:28:37.454286] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:29:41.719 [2024-11-25 12:28:37.454288] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:29:41.719 [2024-11-25 12:28:37.454291] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:29:41.719 [2024-11-25 12:28:37.454306] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:29:41.719 [2024-11-25 12:28:37.454329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.719 [2024-11-25 12:28:37.454336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2489320 with addr=10.0.0.4, port=4420 00:29:41.719 [2024-11-25 12:28:37.454341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2489320 is same with the state(6) to be set 00:29:41.719 [2024-11-25 12:28:37.454347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2489320 (9): Bad file descriptor 00:29:41.719 [2024-11-25 12:28:37.454354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:29:41.719 [2024-11-25 12:28:37.454358] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:29:41.719 [2024-11-25 12:28:37.454363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:29:41.719 [2024-11-25 12:28:37.454366] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 
00:29:41.719 [2024-11-25 12:28:37.454369] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:29:41.719 [2024-11-25 12:28:37.454371] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:29:41.719 [2024-11-25 12:28:37.461627] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:29:41.719 [2024-11-25 12:28:37.461639] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:29:41.719 [2024-11-25 12:28:37.461641] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:29:41.719 [2024-11-25 12:28:37.461643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:41.719 [2024-11-25 12:28:37.461657] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:29:41.719 [2024-11-25 12:28:37.461679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.719 [2024-11-25 12:28:37.461686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x248c550 with addr=10.0.0.3, port=4420 00:29:41.719 [2024-11-25 12:28:37.461690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248c550 is same with the state(6) to be set 00:29:41.719 [2024-11-25 12:28:37.461698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x248c550 (9): Bad file descriptor 00:29:41.720 [2024-11-25 12:28:37.461714] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:41.720 [2024-11-25 12:28:37.461719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:41.720 [2024-11-25 12:28:37.461723] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:41.720 [2024-11-25 12:28:37.461727] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:29:41.720 [2024-11-25 12:28:37.461729] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:29:41.720 [2024-11-25 12:28:37.461731] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:29:41.720 [2024-11-25 12:28:37.464311] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:29:41.720 [2024-11-25 12:28:37.464323] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:29:41.720 [2024-11-25 12:28:37.464325] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:29:41.720 [2024-11-25 12:28:37.464328] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:29:41.720 [2024-11-25 12:28:37.464341] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 
00:29:41.720 [2024-11-25 12:28:37.464361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.720 [2024-11-25 12:28:37.464368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2489320 with addr=10.0.0.4, port=4420 00:29:41.720 [2024-11-25 12:28:37.464373] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2489320 is same with the state(6) to be set 00:29:41.720 [2024-11-25 12:28:37.464380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2489320 (9): Bad file descriptor 00:29:41.720 [2024-11-25 12:28:37.464390] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:29:41.720 [2024-11-25 12:28:37.464394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:29:41.720 [2024-11-25 12:28:37.464399] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:29:41.720 [2024-11-25 12:28:37.464402] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:29:41.720 [2024-11-25 12:28:37.464405] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:29:41.720 [2024-11-25 12:28:37.464407] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:29:41.720 [2024-11-25 12:28:37.471664] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:29:41.720 [2024-11-25 12:28:37.471677] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:29:41.720 [2024-11-25 12:28:37.471679] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:29:41.720 [2024-11-25 12:28:37.471681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:41.720 [2024-11-25 12:28:37.471692] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:29:41.720 [2024-11-25 12:28:37.471712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.720 [2024-11-25 12:28:37.471719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x248c550 with addr=10.0.0.3, port=4420 00:29:41.720 [2024-11-25 12:28:37.471724] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248c550 is same with the state(6) to be set 00:29:41.720 [2024-11-25 12:28:37.471731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x248c550 (9): Bad file descriptor 00:29:41.720 [2024-11-25 12:28:37.471747] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:41.720 [2024-11-25 12:28:37.471759] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:41.720 [2024-11-25 12:28:37.471764] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:41.720 [2024-11-25 12:28:37.471767] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:29:41.720 [2024-11-25 12:28:37.471770] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:29:41.720 [2024-11-25 12:28:37.471772] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:29:41.720 [2024-11-25 12:28:37.474347] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:29:41.720 [2024-11-25 12:28:37.474360] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:29:41.720 [2024-11-25 12:28:37.474363] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:29:41.720 [2024-11-25 12:28:37.474365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:29:41.720 [2024-11-25 12:28:37.474379] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:29:41.720 [2024-11-25 12:28:37.474402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.720 [2024-11-25 12:28:37.474409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2489320 with addr=10.0.0.4, port=4420 00:29:41.720 [2024-11-25 12:28:37.474414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2489320 is same with the state(6) to be set 00:29:41.720 [2024-11-25 12:28:37.474421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2489320 (9): Bad file descriptor 00:29:41.720 [2024-11-25 12:28:37.474428] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:29:41.720 [2024-11-25 12:28:37.474432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:29:41.720 [2024-11-25 12:28:37.474436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:29:41.720 [2024-11-25 12:28:37.474440] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:29:41.720 [2024-11-25 12:28:37.474443] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:29:41.720 [2024-11-25 12:28:37.474445] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:29:41.720 [2024-11-25 12:28:37.481697] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:29:41.720 [2024-11-25 12:28:37.481710] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:29:41.720 [2024-11-25 12:28:37.481712] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:29:41.720 [2024-11-25 12:28:37.481715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:41.720 [2024-11-25 12:28:37.481726] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:29:41.720 [2024-11-25 12:28:37.481755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.720 [2024-11-25 12:28:37.481763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x248c550 with addr=10.0.0.3, port=4420 00:29:41.720 [2024-11-25 12:28:37.481768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248c550 is same with the state(6) to be set 00:29:41.720 [2024-11-25 12:28:37.481775] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x248c550 (9): Bad file descriptor 00:29:41.720 [2024-11-25 12:28:37.481789] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:41.720 [2024-11-25 12:28:37.481793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:41.720 [2024-11-25 12:28:37.481797] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:41.720 [2024-11-25 12:28:37.481801] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:29:41.720 [2024-11-25 12:28:37.481804] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:29:41.720 [2024-11-25 12:28:37.481806] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:29:41.720 [2024-11-25 12:28:37.484384] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:29:41.720 [2024-11-25 12:28:37.484401] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:29:41.720 [2024-11-25 12:28:37.484404] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:29:41.720 [2024-11-25 12:28:37.484406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:29:41.720 [2024-11-25 12:28:37.484417] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:29:41.720 [2024-11-25 12:28:37.484438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.720 [2024-11-25 12:28:37.484445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2489320 with addr=10.0.0.4, port=4420 00:29:41.720 [2024-11-25 12:28:37.484449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2489320 is same with the state(6) to be set 00:29:41.720 [2024-11-25 12:28:37.484456] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2489320 (9): Bad file descriptor 00:29:41.720 [2024-11-25 12:28:37.484463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:29:41.720 [2024-11-25 12:28:37.484466] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:29:41.720 [2024-11-25 12:28:37.484471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:29:41.720 [2024-11-25 12:28:37.484474] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 
00:29:41.720 [2024-11-25 12:28:37.484477] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:29:41.720 [2024-11-25 12:28:37.484479] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:29:41.720 [2024-11-25 12:28:37.491731] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:29:41.720 [2024-11-25 12:28:37.491742] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:29:41.720 [2024-11-25 12:28:37.491745] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:29:41.720 [2024-11-25 12:28:37.491747] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:41.720 [2024-11-25 12:28:37.491764] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:29:41.720 [2024-11-25 12:28:37.491784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.720 [2024-11-25 12:28:37.491791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x248c550 with addr=10.0.0.3, port=4420 00:29:41.720 [2024-11-25 12:28:37.491796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248c550 is same with the state(6) to be set 00:29:41.721 [2024-11-25 12:28:37.491802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x248c550 (9): Bad file descriptor 00:29:41.721 [2024-11-25 12:28:37.491817] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:41.721 [2024-11-25 12:28:37.491821] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:41.721 [2024-11-25 12:28:37.491826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:41.721 [2024-11-25 12:28:37.491830] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:29:41.721 [2024-11-25 12:28:37.491833] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:29:41.721 [2024-11-25 12:28:37.491835] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:29:41.721 [2024-11-25 12:28:37.494423] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:29:41.721 [2024-11-25 12:28:37.494434] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:29:41.721 [2024-11-25 12:28:37.494437] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:29:41.721 [2024-11-25 12:28:37.494439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:29:41.721 [2024-11-25 12:28:37.494453] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 
00:29:41.721 [2024-11-25 12:28:37.494473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.721 [2024-11-25 12:28:37.494480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2489320 with addr=10.0.0.4, port=4420 00:29:41.721 [2024-11-25 12:28:37.494484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2489320 is same with the state(6) to be set 00:29:41.721 [2024-11-25 12:28:37.494492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2489320 (9): Bad file descriptor 00:29:41.721 [2024-11-25 12:28:37.494499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:29:41.721 [2024-11-25 12:28:37.494503] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:29:41.721 [2024-11-25 12:28:37.494508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:29:41.721 [2024-11-25 12:28:37.494511] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:29:41.721 [2024-11-25 12:28:37.494514] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:29:41.721 [2024-11-25 12:28:37.494516] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:29:41.721 [2024-11-25 12:28:37.501770] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:29:41.721 [2024-11-25 12:28:37.501782] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:29:41.721 [2024-11-25 12:28:37.501784] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:29:41.721 [2024-11-25 12:28:37.501786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:41.721 [2024-11-25 12:28:37.501797] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:29:41.721 [2024-11-25 12:28:37.501816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.721 [2024-11-25 12:28:37.501823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x248c550 with addr=10.0.0.3, port=4420 00:29:41.721 [2024-11-25 12:28:37.501827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248c550 is same with the state(6) to be set 00:29:41.721 [2024-11-25 12:28:37.501834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x248c550 (9): Bad file descriptor 00:29:41.721 [2024-11-25 12:28:37.501850] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:41.721 [2024-11-25 12:28:37.501854] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:41.721 [2024-11-25 12:28:37.501859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:41.721 [2024-11-25 12:28:37.501862] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:29:41.721 [2024-11-25 12:28:37.501865] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:29:41.721 [2024-11-25 12:28:37.501867] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:29:41.721 [2024-11-25 12:28:37.504458] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:29:41.721 [2024-11-25 12:28:37.504471] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:29:41.721 [2024-11-25 12:28:37.504473] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:29:41.721 [2024-11-25 12:28:37.504476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:29:41.721 [2024-11-25 12:28:37.504486] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:29:41.721 [2024-11-25 12:28:37.504505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.721 [2024-11-25 12:28:37.504512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2489320 with addr=10.0.0.4, port=4420 00:29:41.721 [2024-11-25 12:28:37.504516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2489320 is same with the state(6) to be set 00:29:41.721 [2024-11-25 12:28:37.504523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2489320 (9): Bad file descriptor 00:29:41.721 [2024-11-25 12:28:37.504530] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:29:41.721 [2024-11-25 12:28:37.504533] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:29:41.721 [2024-11-25 12:28:37.504538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:29:41.721 [2024-11-25 12:28:37.504542] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:29:41.721 [2024-11-25 12:28:37.504544] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:29:41.721 [2024-11-25 12:28:37.504546] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:29:41.721 [2024-11-25 12:28:37.511802] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:29:41.721 [2024-11-25 12:28:37.511815] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:29:41.721 [2024-11-25 12:28:37.511817] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:29:41.721 [2024-11-25 12:28:37.511820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:41.721 [2024-11-25 12:28:37.511830] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:29:41.721 [2024-11-25 12:28:37.511851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.721 [2024-11-25 12:28:37.511858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x248c550 with addr=10.0.0.3, port=4420 00:29:41.721 [2024-11-25 12:28:37.511862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248c550 is same with the state(6) to be set 00:29:41.721 [2024-11-25 12:28:37.511869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x248c550 (9): Bad file descriptor 00:29:41.721 [2024-11-25 12:28:37.511885] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:41.721 [2024-11-25 12:28:37.511889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:41.721 [2024-11-25 12:28:37.511893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:41.721 [2024-11-25 12:28:37.511897] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:29:41.721 [2024-11-25 12:28:37.511899] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:29:41.721 [2024-11-25 12:28:37.511902] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:29:41.721 [2024-11-25 12:28:37.514493] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:29:41.721 [2024-11-25 12:28:37.514504] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:29:41.721 [2024-11-25 12:28:37.514507] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:29:41.721 [2024-11-25 12:28:37.514509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:29:41.721 [2024-11-25 12:28:37.514521] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:29:41.721 [2024-11-25 12:28:37.514542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.721 [2024-11-25 12:28:37.514549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2489320 with addr=10.0.0.4, port=4420 00:29:41.721 [2024-11-25 12:28:37.514554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2489320 is same with the state(6) to be set 00:29:41.721 [2024-11-25 12:28:37.514561] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2489320 (9): Bad file descriptor 00:29:41.721 [2024-11-25 12:28:37.514567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:29:41.721 [2024-11-25 12:28:37.514571] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:29:41.721 [2024-11-25 12:28:37.514576] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:29:41.721 [2024-11-25 12:28:37.514580] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 
00:29:41.721 [2024-11-25 12:28:37.514582] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:29:41.721 [2024-11-25 12:28:37.514585] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:29:41.721 [2024-11-25 12:28:37.521836] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:29:41.721 [2024-11-25 12:28:37.521850] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:29:41.721 [2024-11-25 12:28:37.521853] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:29:41.721 [2024-11-25 12:28:37.521856] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:41.721 [2024-11-25 12:28:37.521867] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:29:41.721 [2024-11-25 12:28:37.521891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.721 [2024-11-25 12:28:37.521898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x248c550 with addr=10.0.0.3, port=4420 00:29:41.721 [2024-11-25 12:28:37.521903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248c550 is same with the state(6) to be set 00:29:41.722 [2024-11-25 12:28:37.521910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x248c550 (9): Bad file descriptor 00:29:41.722 [2024-11-25 12:28:37.521926] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:41.722 [2024-11-25 12:28:37.521930] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:41.722 [2024-11-25 12:28:37.521934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:41.722 [2024-11-25 12:28:37.521938] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:29:41.722 [2024-11-25 12:28:37.521941] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:29:41.722 [2024-11-25 12:28:37.521943] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:29:41.722 [2024-11-25 12:28:37.524527] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:29:41.722 [2024-11-25 12:28:37.524539] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:29:41.722 [2024-11-25 12:28:37.524542] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:29:41.722 [2024-11-25 12:28:37.524544] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:29:41.722 [2024-11-25 12:28:37.524554] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 
00:29:41.722 [2024-11-25 12:28:37.524575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.722 [2024-11-25 12:28:37.524582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2489320 with addr=10.0.0.4, port=4420 00:29:41.722 [2024-11-25 12:28:37.524586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2489320 is same with the state(6) to be set 00:29:41.722 [2024-11-25 12:28:37.524594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2489320 (9): Bad file descriptor 00:29:41.722 [2024-11-25 12:28:37.524601] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:29:41.722 [2024-11-25 12:28:37.524605] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:29:41.722 [2024-11-25 12:28:37.524609] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:29:41.722 [2024-11-25 12:28:37.524613] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:29:41.722 [2024-11-25 12:28:37.524615] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:29:41.722 [2024-11-25 12:28:37.524617] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:29:41.722 [2024-11-25 12:28:37.531873] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:29:41.722 [2024-11-25 12:28:37.531885] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:29:41.722 [2024-11-25 12:28:37.531887] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:29:41.722 [2024-11-25 12:28:37.531890] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:41.722 [2024-11-25 12:28:37.531900] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:29:41.722 [2024-11-25 12:28:37.531921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.722 [2024-11-25 12:28:37.531928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x248c550 with addr=10.0.0.3, port=4420 00:29:41.722 [2024-11-25 12:28:37.531932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248c550 is same with the state(6) to be set 00:29:41.722 [2024-11-25 12:28:37.531939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x248c550 (9): Bad file descriptor 00:29:41.722 [2024-11-25 12:28:37.531954] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:41.722 [2024-11-25 12:28:37.531958] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:41.722 [2024-11-25 12:28:37.531963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:41.722 [2024-11-25 12:28:37.531966] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:29:41.722 [2024-11-25 12:28:37.531969] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:29:41.722 [2024-11-25 12:28:37.531971] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:29:41.722 [2024-11-25 12:28:37.534561] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:29:41.722 [2024-11-25 12:28:37.534574] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:29:41.722 [2024-11-25 12:28:37.534576] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:29:41.722 [2024-11-25 12:28:37.534579] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:29:41.722 [2024-11-25 12:28:37.534591] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:29:41.722 [2024-11-25 12:28:37.534611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.722 [2024-11-25 12:28:37.534617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2489320 with addr=10.0.0.4, port=4420 00:29:41.722 [2024-11-25 12:28:37.534622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2489320 is same with the state(6) to be set 00:29:41.722 [2024-11-25 12:28:37.534629] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2489320 (9): Bad file descriptor 00:29:41.722 [2024-11-25 12:28:37.534635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:29:41.722 [2024-11-25 12:28:37.534640] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:29:41.722 [2024-11-25 12:28:37.534644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:29:41.722 [2024-11-25 12:28:37.534648] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:29:41.722 [2024-11-25 12:28:37.534650] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:29:41.722 [2024-11-25 12:28:37.534653] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:29:41.722 [2024-11-25 12:28:37.541906] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:29:41.722 [2024-11-25 12:28:37.541918] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:29:41.722 [2024-11-25 12:28:37.541920] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:29:41.722 [2024-11-25 12:28:37.541923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:41.722 [2024-11-25 12:28:37.541933] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:29:41.722 [2024-11-25 12:28:37.541953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.722 [2024-11-25 12:28:37.541959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x248c550 with addr=10.0.0.3, port=4420 00:29:41.722 [2024-11-25 12:28:37.541964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248c550 is same with the state(6) to be set 00:29:41.722 [2024-11-25 12:28:37.541972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x248c550 (9): Bad file descriptor 00:29:41.722 [2024-11-25 12:28:37.541988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:41.722 [2024-11-25 12:28:37.541993] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:41.722 [2024-11-25 12:28:37.541997] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:41.722 [2024-11-25 12:28:37.542001] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:29:41.722 [2024-11-25 12:28:37.542003] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:29:41.722 [2024-11-25 12:28:37.542005] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:29:41.722 [2024-11-25 12:28:37.544598] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:29:41.722 [2024-11-25 12:28:37.544610] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:29:41.722 [2024-11-25 12:28:37.544613] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:29:41.722 [2024-11-25 12:28:37.544615] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:29:41.722 [2024-11-25 12:28:37.544625] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 00:29:41.722 [2024-11-25 12:28:37.544646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.722 [2024-11-25 12:28:37.544653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2489320 with addr=10.0.0.4, port=4420 00:29:41.722 [2024-11-25 12:28:37.544657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2489320 is same with the state(6) to be set 00:29:41.722 [2024-11-25 12:28:37.544664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2489320 (9): Bad file descriptor 00:29:41.722 [2024-11-25 12:28:37.544671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:29:41.722 [2024-11-25 12:28:37.544674] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:29:41.722 [2024-11-25 12:28:37.544680] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:29:41.722 [2024-11-25 12:28:37.544684] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 
00:29:41.722 [2024-11-25 12:28:37.544686] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:29:41.722 [2024-11-25 12:28:37.544688] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:29:41.722 [2024-11-25 12:28:37.551940] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:29:41.722 [2024-11-25 12:28:37.551951] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:29:41.722 [2024-11-25 12:28:37.551954] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:29:41.722 [2024-11-25 12:28:37.551957] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:41.723 [2024-11-25 12:28:37.551966] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:29:41.723 [2024-11-25 12:28:37.551986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.723 [2024-11-25 12:28:37.551992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x248c550 with addr=10.0.0.3, port=4420 00:29:41.723 [2024-11-25 12:28:37.551997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248c550 is same with the state(6) to be set 00:29:41.723 [2024-11-25 12:28:37.552004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x248c550 (9): Bad file descriptor 00:29:41.723 [2024-11-25 12:28:37.552020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:41.723 [2024-11-25 12:28:37.552024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:41.723 [2024-11-25 12:28:37.552028] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:41.723 [2024-11-25 12:28:37.552032] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:29:41.723 [2024-11-25 12:28:37.552035] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:29:41.723 [2024-11-25 12:28:37.552037] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:29:41.723 [2024-11-25 12:28:37.554631] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Delete qpairs for reset. 00:29:41.723 [2024-11-25 12:28:37.554643] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] qpairs were deleted. 00:29:41.723 [2024-11-25 12:28:37.554646] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start disconnecting ctrlr. 00:29:41.723 [2024-11-25 12:28:37.554648] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20, 1] resetting controller 00:29:41.723 [2024-11-25 12:28:37.554661] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Start reconnecting ctrlr. 
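
The block above is one full iteration of the bdev_nvme reset/reconnect poller, repeated roughly every 10 ms per controller: delete qpairs, disconnect, reconnect, then fail with errno 111 (ECONNREFUSED) because nothing is listening on 10.0.0.3:4420 or 10.0.0.4:4420 any more; the discovery log pages below show the listeners have moved to port 4421. The retry cadence and give-up point are governed by the controller's reconnect options at attach time. A hedged sketch (these are standard bdev_nvme flags, but the values are illustrative, not taken from this run):

    # Sketch: attach with explicit reconnect tuning so a dead listener is
    # retried once per second and abandoned after five seconds, instead of
    # being polled indefinitely. Values are illustrative only.
    rpc.py -s /tmp/host.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 1
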
00:29:41.723 [2024-11-25 12:28:37.554681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.723 [2024-11-25 12:28:37.554688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2489320 with addr=10.0.0.4, port=4420 00:29:41.723 [2024-11-25 12:28:37.554693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2489320 is same with the state(6) to be set 00:29:41.723 [2024-11-25 12:28:37.554700] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2489320 (9): Bad file descriptor 00:29:41.723 [2024-11-25 12:28:37.554706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Ctrlr is in error state 00:29:41.723 [2024-11-25 12:28:37.554711] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] controller reinitialization failed 00:29:41.723 [2024-11-25 12:28:37.554715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] in failed state. 00:29:41.723 [2024-11-25 12:28:37.554719] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] ctrlr could not be connected. 00:29:41.723 [2024-11-25 12:28:37.554722] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode20, 1] Clear pending resets. 00:29:41.723 [2024-11-25 12:28:37.554724] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode20, 1] Resetting controller failed. 00:29:41.723 [2024-11-25 12:28:37.560026] bdev_nvme.c:7266:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:29:41.723 [2024-11-25 12:28:37.560044] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:29:41.723 [2024-11-25 12:28:37.560054] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:29:41.723 [2024-11-25 12:28:37.560072] bdev_nvme.c:7266:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4420 not found 00:29:41.723 [2024-11-25 12:28:37.560081] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:29:41.723 [2024-11-25 12:28:37.560088] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:29:41.980 [2024-11-25 12:28:37.646085] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:29:41.980 [2024-11-25 12:28:37.646117] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:29:42.914 12:28:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # get_subsystem_names 00:29:42.914 12:28:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:42.914 12:28:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:29:42.914 12:28:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:29:42.914 12:28:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:29:42.914 12:28:38 
nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.914 12:28:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:42.914 12:28:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.914 12:28:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@199 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:29:42.914 12:28:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@200 -- # get_bdev_list 00:29:42.914 12:28:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:29:42.914 12:28:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:29:42.914 12:28:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:42.914 12:28:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:29:42.914 12:28:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.914 12:28:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:42.914 12:28:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.914 12:28:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@200 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:29:42.914 12:28:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@201 -- # get_subsystem_paths mdns0_nvme0 00:29:42.914 12:28:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:29:42.914 12:28:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.914 12:28:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:42.914 12:28:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:29:42.914 12:28:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:42.914 12:28:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # xargs 00:29:42.914 12:28:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.914 12:28:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@201 -- # [[ 4421 == \4\4\2\1 ]] 00:29:42.914 12:28:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # get_subsystem_paths mdns1_nvme0 00:29:42.914 12:28:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:42.914 12:28:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:29:42.914 12:28:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.914 12:28:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- # sort -n 00:29:42.914 12:28:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:42.914 12:28:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@73 -- 
# xargs 00:29:42.914 12:28:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.914 12:28:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@202 -- # [[ 4421 == \4\4\2\1 ]] 00:29:42.914 12:28:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@203 -- # get_notification_count 00:29:42.914 12:28:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:29:42.914 12:28:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length' 00:29:42.914 12:28:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.914 12:28:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:42.914 12:28:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.914 12:28:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=0 00:29:42.914 12:28:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=4 00:29:42.914 12:28:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@204 -- # [[ 0 == 0 ]] 00:29:42.914 12:28:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@206 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:29:42.914 12:28:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.914 12:28:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:42.914 12:28:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.914 12:28:38 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@207 -- # sleep 1 00:29:42.914 [2024-11-25 12:28:38.625607] bdev_mdns_client.c: 425:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:29:43.851 12:28:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@209 -- # get_mdns_discovery_svcs 00:29:43.851 12:28:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:29:43.851 12:28:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:29:43.851 12:28:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:29:43.851 12:28:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:29:43.851 12:28:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.851 12:28:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:43.851 12:28:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.851 12:28:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@209 -- # [[ '' == '' ]] 00:29:43.851 12:28:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@210 -- # get_subsystem_names 00:29:43.851 12:28:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:43.851 12:28:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.851 12:28:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 
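
get_notification_count, traced at @116-@117 here and again a few lines below, keeps a running cursor over the target's event stream: fetch every notification newer than the last seen id, count them with jq, and advance the cursor by that count (0 new events after id 4 at this point; 4 new events in the later pass, moving notify_id from 4 to 8). A minimal reconstruction consistent with the xtrace output (a sketch, not the verbatim mdns_discovery.sh source):

    # Sketch of the notification cursor implied by the trace.
    notify_id=0
    get_notification_count() {
        notification_count=$(rpc_cmd -s /tmp/host.sock \
            notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }
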
00:29:43.851 12:28:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # jq -r '.[].name' 00:29:43.851 12:28:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # sort 00:29:43.851 12:28:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@69 -- # xargs 00:29:43.851 12:28:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.851 12:28:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@210 -- # [[ '' == '' ]] 00:29:43.851 12:28:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@211 -- # get_bdev_list 00:29:43.851 12:28:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:29:43.851 12:28:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:29:43.851 12:28:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:29:43.851 12:28:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:43.851 12:28:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.851 12:28:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:43.851 12:28:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.851 12:28:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@211 -- # [[ '' == '' ]] 00:29:43.851 12:28:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@212 -- # get_notification_count 00:29:43.851 12:28:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # jq '. | length' 00:29:43.851 12:28:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:29:43.851 12:28:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.851 12:28:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:44.109 12:28:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.109 12:28:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@116 -- # notification_count=4 00:29:44.109 12:28:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@117 -- # notify_id=8 00:29:44.109 12:28:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@213 -- # [[ 4 == 4 ]] 00:29:44.109 12:28:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@216 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:29:44.109 12:28:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.109 12:28:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:44.109 12:28:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.109 12:28:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@217 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:29:44.109 12:28:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # local es=0 00:29:44.109 12:28:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:29:44.109 12:28:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:44.109 12:28:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:44.109 12:28:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:44.109 12:28:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:44.109 12:28:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:29:44.109 12:28:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.109 12:28:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:44.109 [2024-11-25 12:28:39.765018] bdev_mdns_client.c: 471:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:29:44.109 2024/11/25 12:28:39 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:29:44.109 request: 00:29:44.109 { 00:29:44.109 "method": "bdev_nvme_start_mdns_discovery", 00:29:44.109 "params": { 00:29:44.109 "name": "mdns", 00:29:44.109 "svcname": "_nvme-disc._http", 00:29:44.109 "hostnqn": "nqn.2021-12.io.spdk:test" 00:29:44.109 } 00:29:44.109 } 00:29:44.109 Got JSON-RPC error response 00:29:44.109 GoRPCClient: error on JSON-RPC call 00:29:44.109 12:28:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:44.109 12:28:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # es=1 00:29:44.109 12:28:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:44.109 12:28:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:44.109 12:28:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:44.109 12:28:39 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@218 -- # sleep 5 00:29:44.782 [2024-11-25 12:28:40.353443] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:29:44.782 [2024-11-25 12:28:40.453440] bdev_mdns_client.c: 396:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:29:44.782 [2024-11-25 12:28:40.553448] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:29:44.782 [2024-11-25 12:28:40.553466] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.4) 00:29:44.782 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:29:44.782 cookie is 0 00:29:44.782 is_local: 1 00:29:44.782 our_own: 0 00:29:44.782 wide_area: 0 00:29:44.782 multicast: 1 00:29:44.782 cached: 1 00:29:44.782 [2024-11-25 12:28:40.653449] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:29:44.782 [2024-11-25 12:28:40.653471] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 
(10.0.0.4) 00:29:44.782 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:29:44.782 cookie is 0 00:29:44.782 is_local: 1 00:29:44.782 our_own: 0 00:29:44.782 wide_area: 0 00:29:44.782 multicast: 1 00:29:44.782 cached: 1 00:29:44.782 [2024-11-25 12:28:40.653479] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.4 trid->trsvcid: 8009 00:29:45.040 [2024-11-25 12:28:40.753451] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk1' of type '_nvme-disc._tcp' in domain 'local' 00:29:45.040 [2024-11-25 12:28:40.753469] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:29:45.040 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:29:45.040 cookie is 0 00:29:45.040 is_local: 1 00:29:45.040 our_own: 0 00:29:45.040 wide_area: 0 00:29:45.040 multicast: 1 00:29:45.040 cached: 1 00:29:45.040 [2024-11-25 12:28:40.853452] bdev_mdns_client.c: 255:mdns_resolve_handler: *INFO*: Service 'spdk0' of type '_nvme-disc._tcp' in domain 'local' 00:29:45.040 [2024-11-25 12:28:40.853473] bdev_mdns_client.c: 260:mdns_resolve_handler: *INFO*: fedora39-cloud-1721788873-2326.local:8009 (10.0.0.3) 00:29:45.040 TXT="nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:29:45.040 cookie is 0 00:29:45.040 is_local: 1 00:29:45.040 our_own: 0 00:29:45.040 wide_area: 0 00:29:45.040 multicast: 1 00:29:45.040 cached: 1 00:29:45.040 [2024-11-25 12:28:40.853480] bdev_mdns_client.c: 323:mdns_resolve_handler: *ERROR*: mDNS discovery entry exists already. trid->traddr: 10.0.0.3 trid->trsvcid: 8009 00:29:45.971 [2024-11-25 12:28:41.561420] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr attached 00:29:45.971 [2024-11-25 12:28:41.561448] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.4:8009] discovery ctrlr connected 00:29:45.971 [2024-11-25 12:28:41.561459] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.4:8009] sent discovery log page command 00:29:45.971 [2024-11-25 12:28:41.647495] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 new subsystem mdns0_nvme0 00:29:45.971 [2024-11-25 12:28:41.705726] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] ctrlr was created to 10.0.0.4:4421 00:29:45.971 [2024-11-25 12:28:41.706138] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] Connecting qpair 0x2477230:1 started. 00:29:45.971 [2024-11-25 12:28:41.707319] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.4:8009] attach mdns0_nvme0 done 00:29:45.971 [2024-11-25 12:28:41.707338] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.4:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.4:4421 found again 00:29:45.971 [2024-11-25 12:28:41.710262] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode20, 3] qpair 0x2477230 was disconnected and freed. delete nvme_qpair. 
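
With mDNS discovery restarted (@216 above), the resolver walks the cached avahi records and the discovery poller attaches fresh controllers to the relocated listeners: mdns0_nvme0 via 10.0.0.4:8009 here, and mdns1_nvme0 via 10.0.0.3:8009 just below. The @220-@227 assertions that follow re-derive the discovery services, controller names, and bdev list over RPC; the pipelines are the same ones the helpers trace:

    # The verification pattern used by get_discovery_ctrlrs / get_bdev_list
    # in the trace: query over the host RPC socket, extract names, normalize.
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info | jq -r '.[].name' | sort | xargs
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
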
00:29:45.971 [2024-11-25 12:28:41.761188] bdev_nvme.c:7479:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:29:45.971 [2024-11-25 12:28:41.761201] bdev_nvme.c:7565:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:29:45.971 [2024-11-25 12:28:41.761210] bdev_nvme.c:7442:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:29:46.228 [2024-11-25 12:28:41.847270] bdev_nvme.c:7408:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem mdns1_nvme0 00:29:46.228 [2024-11-25 12:28:41.905451] bdev_nvme.c:5635:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:29:46.228 [2024-11-25 12:28:41.905790] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x25e7aa0:1 started. 00:29:46.228 [2024-11-25 12:28:41.906903] bdev_nvme.c:7298:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns1_nvme0 done 00:29:46.228 [2024-11-25 12:28:41.906920] bdev_nvme.c:7257:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:29:46.228 [2024-11-25 12:28:41.910255] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x25e7aa0 was disconnected and freed. delete nvme_qpair. 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@220 -- # get_mdns_discovery_svcs 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # jq -r '.[].name' 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # sort 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@81 -- # xargs 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@220 -- # [[ mdns == \m\d\n\s ]] 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@221 -- # get_discovery_ctrlrs 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@221 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ 
\m\d\n\s\1\_\n\v\m\e ]] 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@222 -- # get_bdev_list 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@222 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@225 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@652 -- # local es=0 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:49.505 [2024-11-25 12:28:44.905412] bdev_mdns_client.c: 476:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:29:49.505 2024/11/25 12:28:44 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:29:49.505 request: 00:29:49.505 { 00:29:49.505 "method": "bdev_nvme_start_mdns_discovery", 00:29:49.505 "params": { 00:29:49.505 "name": "cdc", 00:29:49.505 "svcname": "_nvme-disc._tcp", 00:29:49.505 "hostnqn": "nqn.2021-12.io.spdk:test" 00:29:49.505 } 00:29:49.505 } 00:29:49.505 Got JSON-RPC error response 00:29:49.505 GoRPCClient: error on JSON-RPC call 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 
]] 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@655 -- # es=1 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@226 -- # get_discovery_ctrlrs 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # jq -r '.[].name' 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # xargs 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@77 -- # sort 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@226 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@227 -- # get_bdev_list 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # sort 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # jq -r '.[].name' 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@65 -- # xargs 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@227 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@228 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@231 -- # check_mdns_request_exists spdk1 10.0.0.3 8009 found 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local 
ip=10.0.0.3 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local check_type=found 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:29:49.505 12:28:44 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:29:49.505 12:28:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:29:49.505 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:29:49.505 +;(null);IPv4;spdk1;_nvme-disc._tcp;local 00:29:49.505 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:29:49.505 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:29:49.505 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:29:49.505 =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:29:49.505 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:29:49.505 12:28:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:29:49.505 12:28:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:29:49.505 12:28:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:29:49.505 12:28:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\3* ]] 00:29:49.505 12:28:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:29:49.505 12:28:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:29:49.505 12:28:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:29:49.505 12:28:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:29:49.506 12:28:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk1;_nvme-disc._tcp;local == *\1\0\.\0\.\0\.\3* ]] 00:29:49.506 12:28:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:29:49.506 12:28:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:29:49.506 12:28:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:29:49.506 12:28:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:29:49.506 12:28:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ 
=;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\3* ]] 00:29:49.506 12:28:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:29:49.506 12:28:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:29:49.506 12:28:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:29:49.506 12:28:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:29:49.506 12:28:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\1\0\.\0\.\0\.\3* ]] 00:29:49.506 12:28:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk1;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\8\0\0\9* ]] 00:29:49.506 12:28:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@97 -- # [[ found == \f\o\u\n\d ]] 00:29:49.506 12:28:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@98 -- # return 0 00:29:49.506 12:28:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@232 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:29:49.506 12:28:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:49.506 12:28:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:49.506 12:28:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:49.506 12:28:45 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@234 -- # sleep 1 00:29:49.506 [2024-11-25 12:28:45.053482] bdev_mdns_client.c: 425:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:29:50.439 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@236 -- # check_mdns_request_exists spdk1 10.0.0.3 8009 'not found' 00:29:50.439 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@85 -- # local process=spdk1 00:29:50.439 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@86 -- # local ip=10.0.0.3 00:29:50.439 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@87 -- # local port=8009 00:29:50.439 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@88 -- # local 'check_type=not found' 00:29:50.439 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@89 -- # local output 00:29:50.439 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # avahi-browse -t -r _nvme-disc._tcp -p 00:29:50.439 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@92 -- # output='+;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:29:50.439 +;(null);IPv4;spdk0;_nvme-disc._tcp;local 00:29:50.439 
=;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" 00:29:50.439 =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp"' 00:29:50.439 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@93 -- # readarray -t lines 00:29:50.439 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:29:50.439 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:29:50.439 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:29:50.439 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ +;(null);IPv4;spdk0;_nvme-disc._tcp;local == *\s\p\d\k\1* ]] 00:29:50.439 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:29:50.439 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.4;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:29:50.439 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@95 -- # for line in "${lines[@]}" 00:29:50.439 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@96 -- # [[ =;(null);IPv4;spdk0;_nvme-disc._tcp;local;fedora39-cloud-1721788873-2326.local;10.0.0.3;8009;"nqn=nqn.2014-08.org.nvmexpress.discovery" "p=tcp" == *\s\p\d\k\1* ]] 00:29:50.439 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@105 -- # [[ not found == \f\o\u\n\d ]] 00:29:50.439 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@108 -- # return 0 00:29:50.439 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@238 -- # rpc_cmd nvmf_stop_mdns_prr 00:29:50.439 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.439 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:50.439 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.439 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@240 -- # trap - SIGINT SIGTERM EXIT 00:29:50.439 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@242 -- # kill 94225 00:29:50.439 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@245 -- # wait 94225 00:29:50.439 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@246 -- # kill 94255 00:29:50.439 Got SIGTERM, quitting. 00:29:50.439 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- host/mdns_discovery.sh@247 -- # nvmftestfini 00:29:50.439 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:50.439 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@121 -- # sync 00:29:50.439 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.4. 00:29:50.439 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.3. 00:29:50.439 avahi-daemon 0.8 exiting. 
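
check_mdns_request_exists, traced at @85-@108 above, snapshots avahi-browse output and scans each record for the expected process/IP/port tuple; with check_type 'not found' it succeeds only when no matching spdk1 record remains after the 10.0.0.3:8009 listener was removed. A compact reconstruction consistent with that trace (a sketch; the real helper lives in mdns_discovery.sh and checks the patterns one at a time):

    # Sketch of the mDNS record check implied by the xtrace output.
    check_mdns_request_exists() {
        local process=$1 ip=$2 port=$3 check_type=$4
        local output line lines
        output=$(avahi-browse -t -r _nvme-disc._tcp -p)
        readarray -t lines <<< "$output"
        for line in "${lines[@]}"; do
            if [[ $line == *"$process"* && $line == *"$ip"* && $line == *"$port"* ]]; then
                [[ $check_type == found ]] && return 0 || return 1
            fi
        done
        [[ $check_type == found ]] && return 1 || return 0
    }
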
00:29:50.439 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:50.439 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@124 -- # set +e 00:29:50.439 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:50.439 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:50.439 rmmod nvme_tcp 00:29:50.439 rmmod nvme_fabrics 00:29:50.439 rmmod nvme_keyring 00:29:50.439 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:50.439 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@128 -- # set -e 00:29:50.439 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@129 -- # return 0 00:29:50.439 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@517 -- # '[' -n 94175 ']' 00:29:50.439 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@518 -- # killprocess 94175 00:29:50.439 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@954 -- # '[' -z 94175 ']' 00:29:50.439 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@958 -- # kill -0 94175 00:29:50.439 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@959 -- # uname 00:29:50.439 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:50.439 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94175 00:29:50.439 killing process with pid 94175 00:29:50.439 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:50.439 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:50.439 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94175' 00:29:50.439 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@973 -- # kill 94175 00:29:50.439 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@978 -- # wait 94175 00:29:50.697 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:50.697 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:50.697 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:50.697 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@297 -- # iptr 00:29:50.697 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:50.697 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@791 -- # iptables-save 00:29:50.697 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:29:50.697 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:50.697 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:29:50.697 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:29:50.697 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:29:50.697 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- 
nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:29:50.697 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:29:50.697 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:29:50.697 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:29:50.697 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:29:50.697 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:29:50.697 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:29:50.697 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:29:50.955 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:29:50.955 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:50.955 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:50.955 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:29:50.955 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:50.955 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:50.955 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:50.955 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- nvmf/common.sh@300 -- # return 0 00:29:50.955 00:29:50.955 real 0m21.787s 00:29:50.955 user 0m41.961s 00:29:50.955 sys 0m1.642s 00:29:50.955 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:50.955 12:28:46 nvmf_tcp.nvmf_host.nvmf_mdns_discovery -- common/autotest_common.sh@10 -- # set +x 00:29:50.955 ************************************ 00:29:50.955 END TEST nvmf_mdns_discovery 00:29:50.955 ************************************ 00:29:50.955 12:28:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:29:50.955 12:28:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:29:50.955 12:28:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:50.955 12:28:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:50.955 12:28:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.955 ************************************ 00:29:50.955 START TEST nvmf_host_multipath 00:29:50.955 ************************************ 00:29:50.955 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:29:50.955 * Looking for test storage... 
00:29:50.955 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:29:50.955 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:50.955 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:29:50.955 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:51.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.214 --rc genhtml_branch_coverage=1 00:29:51.214 --rc genhtml_function_coverage=1 00:29:51.214 --rc genhtml_legend=1 00:29:51.214 --rc geninfo_all_blocks=1 00:29:51.214 --rc geninfo_unexecuted_blocks=1 00:29:51.214 00:29:51.214 ' 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:51.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.214 --rc genhtml_branch_coverage=1 00:29:51.214 --rc genhtml_function_coverage=1 00:29:51.214 --rc genhtml_legend=1 00:29:51.214 --rc geninfo_all_blocks=1 00:29:51.214 --rc geninfo_unexecuted_blocks=1 00:29:51.214 00:29:51.214 ' 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:51.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.214 --rc genhtml_branch_coverage=1 00:29:51.214 --rc genhtml_function_coverage=1 00:29:51.214 --rc genhtml_legend=1 00:29:51.214 --rc geninfo_all_blocks=1 00:29:51.214 --rc geninfo_unexecuted_blocks=1 00:29:51.214 00:29:51.214 ' 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:51.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.214 --rc genhtml_branch_coverage=1 00:29:51.214 --rc genhtml_function_coverage=1 00:29:51.214 --rc genhtml_legend=1 00:29:51.214 --rc geninfo_all_blocks=1 00:29:51.214 --rc geninfo_unexecuted_blocks=1 00:29:51.214 00:29:51.214 ' 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.214 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:51.215 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:29:51.215 Cannot find device "nvmf_init_br" 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:29:51.215 Cannot find device "nvmf_init_br2" 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:29:51.215 Cannot find device "nvmf_tgt_br" 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:29:51.215 Cannot find device "nvmf_tgt_br2" 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:29:51.215 Cannot find device "nvmf_init_br" 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:29:51.215 Cannot find device "nvmf_init_br2" 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:29:51.215 Cannot find device "nvmf_tgt_br" 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:29:51.215 Cannot find device "nvmf_tgt_br2" 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:29:51.215 Cannot find device "nvmf_br" 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:29:51.215 Cannot find device "nvmf_init_if" 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:29:51.215 Cannot find device "nvmf_init_if2" 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:29:51.215 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:51.215 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:29:51.215 12:28:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:51.215 12:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:51.215 12:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:51.215 12:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:51.215 12:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:51.215 12:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:29:51.215 12:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:29:51.215 12:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:29:51.215 12:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:29:51.215 12:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:29:51.215 12:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:29:51.215 12:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:29:51.215 12:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:29:51.215 12:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:29:51.215 12:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:51.215 12:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:51.215 12:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:51.215 12:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:29:51.215 12:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:29:51.215 12:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
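
Together with the three matching master attachments logged just below, the sequence traced here is the whole nvmf_veth_init topology: two initiator-side veth pairs left on the host, two target-side pairs whose far ends are moved into the nvmf_tgt_ns_spdk namespace, and all four bridge-side ends enslaved to nvmf_br. Stripped of trace prefixes it reduces to roughly this (names and addresses exactly as in the log; error handling omitted):

# nvmf_veth_init, condensed: 10.0.0.1/2 are initiator addresses on the host,
# 10.0.0.3/4 are target addresses inside the namespace.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br    # initiator path 1
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2   # initiator path 2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # target path 1
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2    # target path 2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk               # move target ends
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
           nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br                         # bridge the four ends
done

The earlier "Cannot find device" and "Cannot open network namespace" messages are expected: nvmf_veth_fini runs first and tolerates a missing topology, which is why each failing delete is immediately followed by the corresponding add. After the bridge is assembled, INPUT accepts for TCP/4420 on both initiator interfaces plus an intra-bridge FORWARD accept are inserted (tagged with an SPDK_NVMF comment so the iptr helper seen in the mdns teardown can strip them later), and the four pings prove connectivity in both directions across the bridge.
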
00:29:51.473 12:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:29:51.473 12:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:51.473 12:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:51.473 12:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:51.473 12:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:29:51.473 12:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:29:51.473 12:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:29:51.473 12:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:51.473 12:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:29:51.473 12:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:29:51.473 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:51.473 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:29:51.473 00:29:51.473 --- 10.0.0.3 ping statistics --- 00:29:51.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:51.473 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:29:51.473 12:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:29:51.473 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:29:51.473 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.030 ms 00:29:51.473 00:29:51.473 --- 10.0.0.4 ping statistics --- 00:29:51.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:51.473 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:29:51.473 12:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:51.473 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:51.473 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.014 ms 00:29:51.473 00:29:51.473 --- 10.0.0.1 ping statistics --- 00:29:51.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:51.473 rtt min/avg/max/mdev = 0.014/0.014/0.014/0.000 ms 00:29:51.473 12:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:29:51.473 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:51.473 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.041 ms 00:29:51.473 00:29:51.473 --- 10.0.0.2 ping statistics --- 00:29:51.473 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:51.473 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:29:51.473 12:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:51.473 12:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:29:51.473 12:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:51.473 12:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:51.473 12:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:51.473 12:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:51.473 12:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:51.473 12:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:51.473 12:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:51.473 12:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:29:51.473 12:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:51.473 12:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:51.473 12:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:29:51.473 12:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=94906 00:29:51.473 12:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:29:51.473 12:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 94906 00:29:51.473 12:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 94906 ']' 00:29:51.473 12:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:51.473 12:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:51.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:51.473 12:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:51.473 12:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:51.473 12:28:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:29:51.473 [2024-11-25 12:28:47.195697] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:29:51.473 [2024-11-25 12:28:47.195766] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:51.473 [2024-11-25 12:28:47.335707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:51.731 [2024-11-25 12:28:47.377121] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:51.731 [2024-11-25 12:28:47.377159] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:51.731 [2024-11-25 12:28:47.377166] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:51.731 [2024-11-25 12:28:47.377171] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:51.731 [2024-11-25 12:28:47.377176] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:51.731 [2024-11-25 12:28:47.377983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:51.731 [2024-11-25 12:28:47.377992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:52.294 12:28:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:52.294 12:28:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:29:52.294 12:28:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:52.294 12:28:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:52.294 12:28:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:29:52.294 12:28:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:52.294 12:28:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=94906 00:29:52.294 12:28:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:52.551 [2024-11-25 12:28:48.276781] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:52.551 12:28:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:29:52.808 Malloc0 00:29:52.809 12:28:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:29:53.066 12:28:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:53.323 12:28:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:29:53.581 [2024-11-25 12:28:49.201928] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:29:53.581 12:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 
4421 00:29:53.581 [2024-11-25 12:28:49.410027] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:29:53.581 12:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=95004 00:29:53.581 12:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:29:53.581 12:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:53.581 12:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 95004 /var/tmp/bdevperf.sock 00:29:53.581 12:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 95004 ']' 00:29:53.581 12:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:53.581 12:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:53.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:53.581 12:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:53.581 12:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:53.581 12:28:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:29:54.513 12:28:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:54.513 12:28:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:29:54.513 12:28:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:29:54.771 12:28:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:29:55.027 Nvme0n1 00:29:55.027 12:28:50 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:29:55.590 Nvme0n1 00:29:55.590 12:28:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:29:55.590 12:28:51 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:29:56.522 12:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:29:56.522 12:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:29:56.522 12:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 
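
That pair of set_ana_state calls is the heart of the whole suite: with cnode1 listening on 10.0.0.3:4420 and 10.0.0.3:4421 and bdevperf attached over both, every test case is just a different per-listener ANA assignment followed by a check of which path carries the I/O. Condensed from the trace, with rpc.py paths shortened (flags exactly as logged; -x multipath on the second attach is what makes it a path of Nvme0 rather than a new bdev):

# Target side: one subsystem, one Malloc namespace, two TCP listeners.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421

# Host side: bdevperf already running (-m 0x4 -z -r /var/tmp/bdevperf.sock
# -q 128 -o 4096 -w verify -t 90, as logged), driven over its own RPC socket.
brpc="$rpc -s /var/tmp/bdevperf.sock"
$brpc bdev_nvme_set_options -r -1
$brpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
$brpc bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

# One test round, e.g. steer I/O to 4421:
$rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
     -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
$rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
     -t tcp -a 10.0.0.3 -s 4421 -n optimized

Each such round is then verified by confirm_io_on_port, sketched at the end of this section.
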
00:29:56.779 12:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:29:56.779 12:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94906 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:29:56.779 12:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95091 00:29:56.779 12:28:52 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:30:03.413 12:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:30:03.413 12:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:30:03.413 12:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:30:03.413 12:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:03.413 Attaching 4 probes... 00:30:03.413 @path[10.0.0.3, 4421]: 24591 00:30:03.413 @path[10.0.0.3, 4421]: 24622 00:30:03.413 @path[10.0.0.3, 4421]: 24146 00:30:03.413 @path[10.0.0.3, 4421]: 24073 00:30:03.413 @path[10.0.0.3, 4421]: 24213 00:30:03.413 12:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:30:03.413 12:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:30:03.413 12:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:30:03.413 12:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:30:03.413 12:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:30:03.413 12:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:30:03.413 12:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95091 00:30:03.413 12:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:03.413 12:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:30:03.413 12:28:58 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:30:03.413 12:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:30:03.413 12:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:30:03.413 12:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94906 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:30:03.413 12:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95228 00:30:03.413 12:28:59 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:30:09.969 12:29:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:30:09.969 12:29:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:30:09.969 12:29:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:30:09.969 12:29:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:09.969 Attaching 4 probes... 00:30:09.969 @path[10.0.0.3, 4420]: 22899 00:30:09.969 @path[10.0.0.3, 4420]: 22758 00:30:09.969 @path[10.0.0.3, 4420]: 22617 00:30:09.969 @path[10.0.0.3, 4420]: 22026 00:30:09.969 @path[10.0.0.3, 4420]: 21907 00:30:09.969 12:29:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:30:09.969 12:29:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:30:09.969 12:29:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:30:09.969 12:29:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:30:09.969 12:29:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:30:09.969 12:29:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:30:09.969 12:29:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95228 00:30:09.969 12:29:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:09.969 12:29:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:30:09.969 12:29:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:30:09.969 12:29:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:30:10.226 12:29:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:30:10.227 12:29:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94906 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:30:10.227 12:29:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95358 00:30:10.227 12:29:05 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:30:16.781 12:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:30:16.782 12:29:11 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:30:16.782 12:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:30:16.782 12:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:16.782 Attaching 4 probes... 
00:30:16.782 @path[10.0.0.3, 4421]: 14712 00:30:16.782 @path[10.0.0.3, 4421]: 24086 00:30:16.782 @path[10.0.0.3, 4421]: 24099 00:30:16.782 @path[10.0.0.3, 4421]: 24536 00:30:16.782 @path[10.0.0.3, 4421]: 24856 00:30:16.782 12:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:30:16.782 12:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:30:16.782 12:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:30:16.782 12:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:30:16.782 12:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:30:16.782 12:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:30:16.782 12:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95358 00:30:16.782 12:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:16.782 12:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:30:16.782 12:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:30:16.782 12:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:30:16.782 12:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:30:16.782 12:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95493 00:30:16.782 12:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:30:16.782 12:29:12 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94906 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:30:23.355 12:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:30:23.355 12:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:30:23.355 12:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:30:23.355 12:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:23.355 Attaching 4 probes... 
00:30:23.355 00:30:23.355 00:30:23.355 00:30:23.355 00:30:23.355 00:30:23.355 12:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:30:23.355 12:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:30:23.355 12:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:30:23.355 12:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:30:23.355 12:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:30:23.355 12:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:30:23.355 12:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95493 00:30:23.355 12:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:23.355 12:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:30:23.356 12:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:30:23.356 12:29:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:30:23.356 12:29:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:30:23.356 12:29:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95622 00:30:23.356 12:29:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:30:23.356 12:29:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94906 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:30:29.910 12:29:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:30:29.910 12:29:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:30:29.910 12:29:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:30:29.910 12:29:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:29.910 Attaching 4 probes... 
00:30:29.910 @path[10.0.0.3, 4421]: 24062 00:30:29.910 @path[10.0.0.3, 4421]: 24647 00:30:29.910 @path[10.0.0.3, 4421]: 24669 00:30:29.910 @path[10.0.0.3, 4421]: 21526 00:30:29.910 @path[10.0.0.3, 4421]: 24704 12:29:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:30:29.910 12:29:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:30:29.910 12:29:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:30:29.910 12:29:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:30:29.910 12:29:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:30:29.910 12:29:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:30:29.910 12:29:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95622 00:30:29.910 12:29:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:29.910 12:29:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:30:29.910 [2024-11-25 12:29:25.598497] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da0e90 is same with the state(6) to be set
00:30:29.910 [the previous tcp.c:1773 record repeats verbatim for tqpair=0x1da0e90, timestamps 12:29:25.598548 through 12:29:25.598951]
with the state(6) to be set 00:30:29.911 [2024-11-25 12:29:25.598955] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da0e90 is same with the state(6) to be set 00:30:29.911 [2024-11-25 12:29:25.598958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da0e90 is same with the state(6) to be set 00:30:29.911 [2024-11-25 12:29:25.598962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da0e90 is same with the state(6) to be set 00:30:29.911 [2024-11-25 12:29:25.598965] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da0e90 is same with the state(6) to be set 00:30:29.911 [2024-11-25 12:29:25.598969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da0e90 is same with the state(6) to be set 00:30:29.911 [2024-11-25 12:29:25.598973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da0e90 is same with the state(6) to be set 00:30:29.911 [2024-11-25 12:29:25.598976] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da0e90 is same with the state(6) to be set 00:30:29.911 [2024-11-25 12:29:25.598980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da0e90 is same with the state(6) to be set 00:30:29.911 [2024-11-25 12:29:25.598983] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da0e90 is same with the state(6) to be set 00:30:29.911 [2024-11-25 12:29:25.598987] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da0e90 is same with the state(6) to be set 00:30:29.911 [2024-11-25 12:29:25.598991] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da0e90 is same with the state(6) to be set 00:30:29.911 [2024-11-25 12:29:25.598995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da0e90 is same with the state(6) to be set 00:30:29.911 [2024-11-25 12:29:25.599003] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da0e90 is same with the state(6) to be set 00:30:29.911 [2024-11-25 12:29:25.599007] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da0e90 is same with the state(6) to be set 00:30:29.911 [2024-11-25 12:29:25.599010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da0e90 is same with the state(6) to be set 00:30:29.911 [2024-11-25 12:29:25.599014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da0e90 is same with the state(6) to be set 00:30:29.911 [2024-11-25 12:29:25.599017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da0e90 is same with the state(6) to be set 00:30:29.911 [2024-11-25 12:29:25.599021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da0e90 is same with the state(6) to be set 00:30:29.911 [2024-11-25 12:29:25.599024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da0e90 is same with the state(6) to be set 00:30:29.911 [2024-11-25 12:29:25.599028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da0e90 is same with the state(6) to be set 00:30:29.911 [2024-11-25 12:29:25.599031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da0e90 is same with the state(6) to be set 00:30:29.912 [2024-11-25 12:29:25.599035] 
00:30:29.912 12:29:25 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:30:30.843 12:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:30:30.843 12:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95759 00:30:30.843 12:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:30:30.843 12:29:26 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94906 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:30:37.392 12:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:30:37.392 12:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:30:37.392 12:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:30:37.392 12:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:37.392 Attaching 4 probes... 
00:30:37.392 @path[10.0.0.3, 4420]: 21834 00:30:37.392 @path[10.0.0.3, 4420]: 21503 00:30:37.392 @path[10.0.0.3, 4420]: 21429 00:30:37.392 @path[10.0.0.3, 4420]: 20656 00:30:37.392 @path[10.0.0.3, 4420]: 20224 00:30:37.392 12:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:30:37.392 12:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:30:37.392 12:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:30:37.392 12:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:30:37.392 12:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:30:37.392 12:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:30:37.392 12:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95759 00:30:37.392 12:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:37.392 12:29:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:30:37.392 [2024-11-25 12:29:33.040605] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:30:37.392 12:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:30:37.649 12:29:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:30:44.230 12:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:30:44.230 12:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95951 00:30:44.230 12:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:30:44.230 12:29:39 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94906 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:30:49.487 12:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:30:49.487 12:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:30:49.745 12:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:30:49.745 12:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:49.745 Attaching 4 probes... 
00:30:49.745 @path[10.0.0.3, 4421]: 18777 00:30:49.745 @path[10.0.0.3, 4421]: 19015 00:30:49.745 @path[10.0.0.3, 4421]: 20054 00:30:49.745 @path[10.0.0.3, 4421]: 23594 00:30:49.745 @path[10.0.0.3, 4421]: 24158 00:30:49.745 12:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:30:49.745 12:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:30:49.745 12:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:30:49.745 12:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:30:49.745 12:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:30:49.745 12:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:30:49.745 12:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95951 00:30:49.745 12:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:49.745 12:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 95004 00:30:49.745 12:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 95004 ']' 00:30:49.745 12:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 95004 00:30:49.745 12:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:30:49.745 12:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:49.745 12:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 95004 00:30:49.745 killing process with pid 95004 00:30:49.745 12:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:30:49.745 12:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:30:49.745 12:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 95004' 00:30:49.745 12:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 95004 00:30:49.745 12:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 95004 00:30:49.745 { 00:30:49.745 "results": [ 00:30:49.745 { 00:30:49.745 "job": "Nvme0n1", 00:30:49.745 "core_mask": "0x4", 00:30:49.745 "workload": "verify", 00:30:49.745 "status": "terminated", 00:30:49.745 "verify_range": { 00:30:49.745 "start": 0, 00:30:49.745 "length": 16384 00:30:49.745 }, 00:30:49.745 "queue_depth": 128, 00:30:49.745 "io_size": 4096, 00:30:49.745 "runtime": 54.291638, 00:30:49.745 "iops": 9770.583823608344, 00:30:49.745 "mibps": 38.16634306097009, 00:30:49.745 "io_failed": 0, 00:30:49.745 "io_timeout": 0, 00:30:49.745 "avg_latency_us": 13077.807986498829, 00:30:49.745 "min_latency_us": 304.04923076923075, 00:30:49.745 "max_latency_us": 7020619.618461538 00:30:49.745 } 00:30:49.745 ], 00:30:49.745 "core_count": 1 00:30:49.745 } 00:30:50.011 12:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 95004 00:30:50.011 12:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:30:50.011 [2024-11-25 12:28:49.468477] Starting SPDK v25.01-pre git sha1 f1dd81af3 
/ DPDK 24.03.0 initialization... 00:30:50.011 [2024-11-25 12:28:49.468558] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95004 ] 00:30:50.011 [2024-11-25 12:28:49.608899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:50.011 [2024-11-25 12:28:49.644797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:50.011 Running I/O for 90 seconds... 00:30:50.011 10872.00 IOPS, 42.47 MiB/s [2024-11-25T12:29:45.887Z] 11668.00 IOPS, 45.58 MiB/s [2024-11-25T12:29:45.887Z] 11988.33 IOPS, 46.83 MiB/s [2024-11-25T12:29:45.887Z] 12037.00 IOPS, 47.02 MiB/s [2024-11-25T12:29:45.887Z] 12051.40 IOPS, 47.08 MiB/s [2024-11-25T12:29:45.887Z] 12055.83 IOPS, 47.09 MiB/s [2024-11-25T12:29:45.887Z] 12048.71 IOPS, 47.07 MiB/s [2024-11-25T12:29:45.887Z] [2024-11-25 12:28:59.220684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:113720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.011 [2024-11-25 12:28:59.220732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:50.011 [2024-11-25 12:28:59.220773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:113728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.011 [2024-11-25 12:28:59.220784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:50.011 [2024-11-25 12:28:59.220796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:113736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.011 [2024-11-25 12:28:59.220804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:30:50.011 [2024-11-25 12:28:59.220816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:113744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.011 [2024-11-25 12:28:59.220824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:50.011 [2024-11-25 12:28:59.220836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:113752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.011 [2024-11-25 12:28:59.220843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:30:50.011 [2024-11-25 12:28:59.220855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:113760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.011 [2024-11-25 12:28:59.220863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:50.011 [2024-11-25 12:28:59.220875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:113768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.011 [2024-11-25 12:28:59.220883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:50.011 [2024-11-25 12:28:59.220895] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:113776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.011 [2024-11-25 12:28:59.220902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:50.011 [2024-11-25 12:28:59.221218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:113784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.011 [2024-11-25 12:28:59.221229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:50.011 [2024-11-25 12:28:59.221243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:113792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.011 [2024-11-25 12:28:59.221252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:50.011 [2024-11-25 12:28:59.221281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:113800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.011 [2024-11-25 12:28:59.221288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:50.011 [2024-11-25 12:28:59.221301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:113808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.011 [2024-11-25 12:28:59.221309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:50.011 [2024-11-25 12:28:59.221322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:113816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.011 [2024-11-25 12:28:59.221329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:50.011 [2024-11-25 12:28:59.221341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:113824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.011 [2024-11-25 12:28:59.221350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:50.011 [2024-11-25 12:28:59.221362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:113832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.011 [2024-11-25 12:28:59.221370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:50.011 [2024-11-25 12:28:59.221383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:113840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.011 [2024-11-25 12:28:59.221391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:50.011 [2024-11-25 12:28:59.221504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:113848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.011 [2024-11-25 12:28:59.221515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 
sqhd:002b p:0 m:0 dnr:0 00:30:50.011 [2024-11-25 12:28:59.221530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:113856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.011 [2024-11-25 12:28:59.221537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:50.011 [2024-11-25 12:28:59.221550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:113864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.011 [2024-11-25 12:28:59.221558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:50.011 [2024-11-25 12:28:59.221571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:113872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.011 [2024-11-25 12:28:59.221579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:50.011 [2024-11-25 12:28:59.221592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:113880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.012 [2024-11-25 12:28:59.221599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:50.012 [2024-11-25 12:28:59.221611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:113888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.012 [2024-11-25 12:28:59.221618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:50.012 [2024-11-25 12:28:59.221636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:113896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.012 [2024-11-25 12:28:59.221644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:50.012 [2024-11-25 12:28:59.221656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:113904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.012 [2024-11-25 12:28:59.221664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:50.012 [2024-11-25 12:28:59.221676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:113912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.012 [2024-11-25 12:28:59.221683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:50.012 [2024-11-25 12:28:59.221696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:113920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.012 [2024-11-25 12:28:59.221703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:50.012 [2024-11-25 12:28:59.221716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:113928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.012 [2024-11-25 12:28:59.221724] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:50.012 [2024-11-25 12:28:59.221738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:113936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.012 [2024-11-25 12:28:59.221745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:50.012 [2024-11-25 12:28:59.221768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:113944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.012 [2024-11-25 12:28:59.221775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:50.012 [2024-11-25 12:28:59.221788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:113952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.012 [2024-11-25 12:28:59.221795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:50.012 [2024-11-25 12:28:59.221807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:113960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.012 [2024-11-25 12:28:59.221815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:50.012 [2024-11-25 12:28:59.221828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:113968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.012 [2024-11-25 12:28:59.221835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:50.012 [2024-11-25 12:28:59.222816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:113976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.012 [2024-11-25 12:28:59.222830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:50.012 [2024-11-25 12:28:59.222844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:113984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.012 [2024-11-25 12:28:59.222852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:50.012 [2024-11-25 12:28:59.222864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:113992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.012 [2024-11-25 12:28:59.222878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:50.012 [2024-11-25 12:28:59.222891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:114000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.012 [2024-11-25 12:28:59.222898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:50.012 [2024-11-25 12:28:59.222911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:113080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.012 [2024-11-25 
12:28:59.222918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:50.012 [2024-11-25 12:28:59.222931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:113088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.012 [2024-11-25 12:28:59.222938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:50.012 [2024-11-25 12:28:59.222951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:113096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.012 [2024-11-25 12:28:59.222958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:50.012 [2024-11-25 12:28:59.222971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:113104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.012 [2024-11-25 12:28:59.222978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:50.012 [2024-11-25 12:28:59.222990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:113112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.012 [2024-11-25 12:28:59.222998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:50.012 [2024-11-25 12:28:59.223010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:113120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.012 [2024-11-25 12:28:59.223017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:50.012 [2024-11-25 12:28:59.223030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:113128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.012 [2024-11-25 12:28:59.223038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:50.012 [2024-11-25 12:28:59.223050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:113136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.012 [2024-11-25 12:28:59.223062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:50.012 [2024-11-25 12:28:59.223075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:113144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.012 [2024-11-25 12:28:59.223082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:50.012 [2024-11-25 12:28:59.223095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:113152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.012 [2024-11-25 12:28:59.223102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:50.012 [2024-11-25 12:28:59.223115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:113160 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.012 [2024-11-25 12:28:59.223125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:50.012 [2024-11-25 12:28:59.223137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:113168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.012 [2024-11-25 12:28:59.223145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:50.012 [2024-11-25 12:28:59.223157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:113176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.012 [2024-11-25 12:28:59.223165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:50.012 [2024-11-25 12:28:59.223178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:113184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.012 [2024-11-25 12:28:59.223185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:50.012 [2024-11-25 12:28:59.223197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:113192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.012 [2024-11-25 12:28:59.223205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:50.012 [2024-11-25 12:28:59.223217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:113200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.012 [2024-11-25 12:28:59.223224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:50.012 [2024-11-25 12:28:59.223236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:113208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.012 [2024-11-25 12:28:59.223243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:50.012 [2024-11-25 12:28:59.223256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:113216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.012 [2024-11-25 12:28:59.223263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:50.012 [2024-11-25 12:28:59.223276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.012 [2024-11-25 12:28:59.223283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:50.012 [2024-11-25 12:28:59.223295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:113232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.012 [2024-11-25 12:28:59.223302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:50.012 [2024-11-25 12:28:59.223315] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:113240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.012 [2024-11-25 12:28:59.223322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:50.012 [2024-11-25 12:28:59.223335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:113248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.012 [2024-11-25 12:28:59.223342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:50.012 [2024-11-25 12:28:59.223355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:113256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.012 [2024-11-25 12:28:59.223365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:50.013 [2024-11-25 12:28:59.223378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:113264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.013 [2024-11-25 12:28:59.223386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:50.013 [2024-11-25 12:28:59.223399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:113272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.013 [2024-11-25 12:28:59.223406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:50.013 [2024-11-25 12:28:59.223419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:114008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.013 [2024-11-25 12:28:59.223426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:50.013 [2024-11-25 12:28:59.223438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:114016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.013 [2024-11-25 12:28:59.223446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:50.013 [2024-11-25 12:28:59.223458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:114024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.013 [2024-11-25 12:28:59.223465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:50.013 [2024-11-25 12:28:59.223478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:114032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.013 [2024-11-25 12:28:59.223485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:50.013 [2024-11-25 12:28:59.223498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:113280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.013 [2024-11-25 12:28:59.223505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005c p:0 m:0 
dnr:0 00:30:50.013 [2024-11-25 12:28:59.223782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:113288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.013 [2024-11-25 12:28:59.223794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:50.013 [2024-11-25 12:28:59.223808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:113296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.013 [2024-11-25 12:28:59.223816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:50.013 [2024-11-25 12:28:59.223828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:113304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.013 [2024-11-25 12:28:59.223836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:50.013 [2024-11-25 12:28:59.223849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:113312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.013 [2024-11-25 12:28:59.223856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:50.013 [2024-11-25 12:28:59.223868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:113320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.013 [2024-11-25 12:28:59.223875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:50.013 [2024-11-25 12:28:59.223892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:113328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.013 [2024-11-25 12:28:59.223900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:50.013 [2024-11-25 12:28:59.224418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:114040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.013 [2024-11-25 12:28:59.224428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:50.013 [2024-11-25 12:28:59.224441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:113336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.013 [2024-11-25 12:28:59.224449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:50.013 [2024-11-25 12:28:59.224461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:113344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.013 [2024-11-25 12:28:59.224469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:50.013 [2024-11-25 12:28:59.224482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:113352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.013 [2024-11-25 12:28:59.224490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:50.013 [2024-11-25 12:28:59.224502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:113360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.013 [2024-11-25 12:28:59.224509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:50.013 [2024-11-25 12:28:59.224522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:113368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.013 [2024-11-25 12:28:59.224529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:50.013 [2024-11-25 12:28:59.224541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:113376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.013 [2024-11-25 12:28:59.224549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:50.013 [2024-11-25 12:28:59.224561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:113384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.013 [2024-11-25 12:28:59.224573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:50.013 [2024-11-25 12:28:59.224585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:113392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.013 [2024-11-25 12:28:59.224593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:50.013 [2024-11-25 12:28:59.224605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:113400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.013 [2024-11-25 12:28:59.224612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:50.013 [2024-11-25 12:28:59.224625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:113408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.013 [2024-11-25 12:28:59.224632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:50.013 [2024-11-25 12:28:59.224652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:113416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.013 [2024-11-25 12:28:59.224660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:50.013 [2024-11-25 12:28:59.224672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:113424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.013 [2024-11-25 12:28:59.224679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:50.013 [2024-11-25 12:28:59.224692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:113432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.013 [2024-11-25 
[2024-11-25 12:28:59.224699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:30:50.013 [2024-11-25 12:28:59.224712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:113440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:50.013 [2024-11-25 12:28:59.224719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
[... 34 further READ command/completion pairs, 12:28:59.224731-12:28:59.225663: READ sqid:1 nsid:1 lba:113448-113712 (step 8) len:8, every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1, sqhd 0072-0013, p:0 m:0 dnr:0 ...]
00:30:50.014 12061.88 IOPS, 47.12 MiB/s [2024-11-25T12:29:45.890Z] 11992.78 IOPS, 46.85 MiB/s [2024-11-25T12:29:45.890Z] 11942.90 IOPS, 46.65 MiB/s [2024-11-25T12:29:45.890Z] 11882.82 IOPS, 46.42 MiB/s [2024-11-25T12:29:45.890Z] 11825.25 IOPS, 46.19 MiB/s [2024-11-25T12:29:45.890Z] 11757.77 IOPS, 45.93 MiB/s [2024-11-25T12:29:45.890Z] 11700.86 IOPS, 45.71 MiB/s [2024-11-25T12:29:45.890Z]
[2024-11-25 12:29:05.665269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:34328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:50.014 [2024-11-25 12:29:05.665311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
[... 32 further READ command/completion pairs, 12:29:05.665345-12:29:05.666347: READ sqid:1 nsid:1 lba:34336-34584 (step 8) len:8, every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1, sqhd 0011-0030, p:0 m:0 dnr:0 ...]
[... 95 WRITE command/completion pairs, 12:29:05.666361-12:29:05.668663: WRITE sqid:1 nsid:1 lba:34592-35344 (step 8) len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1, sqhd 0031-000f, p:0 m:0 dnr:0 ...]
00:30:50.018 11208.73 IOPS, 43.78 MiB/s [2024-11-25T12:29:45.894Z] 10965.75 IOPS, 42.83 MiB/s [2024-11-25T12:29:45.894Z] 11028.35 IOPS, 43.08 MiB/s [2024-11-25T12:29:45.894Z] 11086.50 IOPS, 43.31 MiB/s [2024-11-25T12:29:45.894Z] 11147.53 IOPS, 43.55 MiB/s [2024-11-25T12:29:45.894Z] 11213.30 IOPS, 43.80 MiB/s [2024-11-25T12:29:45.894Z] 11257.38 IOPS, 43.97 MiB/s [2024-11-25T12:29:45.894Z]
[2024-11-25 12:29:12.503545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:79056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:50.018 [2024-11-25 12:29:12.503593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... 40 further WRITE command/completion pairs, 12:29:12.503627-12:29:12.504828: WRITE sqid:1 nsid:1 lba:79064-79376 (step 8) len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1, sqhd 0003-002a, p:0 m:0 dnr:0 ...]
00:30:50.019 [2024-11-25 12:29:12.504841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:79384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.019 [2024-11-25 12:29:12.504849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:50.019 [2024-11-25 12:29:12.504861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:79392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.019 [2024-11-25 12:29:12.504869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:50.019 [2024-11-25 12:29:12.504882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:79400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.019 [2024-11-25 12:29:12.504890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:50.019 [2024-11-25 12:29:12.504902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:79408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.019 [2024-11-25 12:29:12.504910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:50.019 [2024-11-25 12:29:12.504923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:79416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.019 [2024-11-25 12:29:12.504930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:50.019 [2024-11-25 12:29:12.504942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:79424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.019 [2024-11-25 12:29:12.504950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:50.019 [2024-11-25 12:29:12.504963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:79432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.019 [2024-11-25 12:29:12.504970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:50.019 [2024-11-25 12:29:12.504984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:79440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.019 [2024-11-25 12:29:12.504991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:50.019 [2024-11-25 12:29:12.505004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:79448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.019 [2024-11-25 12:29:12.505011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:50.019 [2024-11-25 12:29:12.505023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:79456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.019 [2024-11-25 12:29:12.505031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:41 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:50.019 [2024-11-25 12:29:12.505044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:79464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.019 [2024-11-25 12:29:12.505051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:50.019 [2024-11-25 12:29:12.505064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.019 [2024-11-25 12:29:12.505072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:50.019 [2024-11-25 12:29:12.505097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.019 [2024-11-25 12:29:12.505105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:50.019 [2024-11-25 12:29:12.505118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:79488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.019 [2024-11-25 12:29:12.505125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:50.019 [2024-11-25 12:29:12.505138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:79496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.019 [2024-11-25 12:29:12.505145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:50.019 [2024-11-25 12:29:12.505158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:79504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.019 [2024-11-25 12:29:12.505165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:50.019 [2024-11-25 12:29:12.505178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:79512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.019 [2024-11-25 12:29:12.505186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:50.019 [2024-11-25 12:29:12.505198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:79520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.019 [2024-11-25 12:29:12.505206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:50.019 [2024-11-25 12:29:12.505219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:79528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.019 [2024-11-25 12:29:12.505226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:50.019 [2024-11-25 12:29:12.505239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:79536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.019 [2024-11-25 12:29:12.505247] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:50.019 [2024-11-25 12:29:12.505260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:79544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.019 [2024-11-25 12:29:12.505267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:50.019 [2024-11-25 12:29:12.505280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:79552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.019 [2024-11-25 12:29:12.505288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:50.019 [2024-11-25 12:29:12.505301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:78808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.019 [2024-11-25 12:29:12.505308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:50.019 [2024-11-25 12:29:12.505321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:78816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.019 [2024-11-25 12:29:12.505328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:50.019 [2024-11-25 12:29:12.505345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:78824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.019 [2024-11-25 12:29:12.505354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:50.019 [2024-11-25 12:29:12.505481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:78832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.019 [2024-11-25 12:29:12.505492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:50.019 [2024-11-25 12:29:12.505509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:78840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.019 [2024-11-25 12:29:12.505517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:50.019 [2024-11-25 12:29:12.505534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:78848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.019 [2024-11-25 12:29:12.505541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:50.019 [2024-11-25 12:29:12.505557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:78856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.019 [2024-11-25 12:29:12.505565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:50.019 [2024-11-25 12:29:12.505582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:78864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:30:50.019 [2024-11-25 12:29:12.505589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:50.019 [2024-11-25 12:29:12.505606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:78872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.019 [2024-11-25 12:29:12.505613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:50.019 [2024-11-25 12:29:12.505629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:78880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.019 [2024-11-25 12:29:12.505637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:50.019 [2024-11-25 12:29:12.505653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.020 [2024-11-25 12:29:12.505661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:50.020 [2024-11-25 12:29:12.505677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:78896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.020 [2024-11-25 12:29:12.505685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:50.020 [2024-11-25 12:29:12.505702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:78904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.020 [2024-11-25 12:29:12.505709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:50.020 [2024-11-25 12:29:12.505726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:78912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.020 [2024-11-25 12:29:12.505734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:50.020 [2024-11-25 12:29:12.505759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.020 [2024-11-25 12:29:12.505770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:50.020 [2024-11-25 12:29:12.505787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:79560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.020 [2024-11-25 12:29:12.505795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:50.020 [2024-11-25 12:29:12.505811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:79568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.020 [2024-11-25 12:29:12.505818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:50.020 [2024-11-25 12:29:12.505834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 
nsid:1 lba:79576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.020 [2024-11-25 12:29:12.505842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:50.020 [2024-11-25 12:29:12.505858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:79584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.020 [2024-11-25 12:29:12.505866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:50.020 [2024-11-25 12:29:12.505882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:79592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.020 [2024-11-25 12:29:12.505890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:50.020 [2024-11-25 12:29:12.505907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:79600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.020 [2024-11-25 12:29:12.505915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:50.020 [2024-11-25 12:29:12.505931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:79608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.020 [2024-11-25 12:29:12.505939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:50.020 [2024-11-25 12:29:12.505955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:79616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.020 [2024-11-25 12:29:12.505962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:50.020 [2024-11-25 12:29:12.505979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:79624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.020 [2024-11-25 12:29:12.505986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:50.020 [2024-11-25 12:29:12.506003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:79632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.020 [2024-11-25 12:29:12.506010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:50.020 [2024-11-25 12:29:12.506027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:79640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.020 [2024-11-25 12:29:12.506035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:50.020 [2024-11-25 12:29:12.506051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:79648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.020 [2024-11-25 12:29:12.506061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:50.020 [2024-11-25 12:29:12.506077] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:79656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.020 [2024-11-25 12:29:12.506085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:50.020 [2024-11-25 12:29:12.506108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:79664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.020 [2024-11-25 12:29:12.506122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:50.020 [2024-11-25 12:29:12.506139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:79672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.020 [2024-11-25 12:29:12.506146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:50.020 [2024-11-25 12:29:12.506168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:79680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.020 [2024-11-25 12:29:12.506176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:50.020 [2024-11-25 12:29:12.506192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:79688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.020 [2024-11-25 12:29:12.506199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:50.020 [2024-11-25 12:29:12.506216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:79696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.020 [2024-11-25 12:29:12.506223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:50.020 [2024-11-25 12:29:12.506239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:79704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.020 [2024-11-25 12:29:12.506247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:50.020 [2024-11-25 12:29:12.506263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:79712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.020 [2024-11-25 12:29:12.506270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:50.020 [2024-11-25 12:29:12.506287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:79720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.020 [2024-11-25 12:29:12.506295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:50.020 [2024-11-25 12:29:12.506311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:79728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.020 [2024-11-25 12:29:12.506318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 
00:30:50.020 [2024-11-25 12:29:12.506334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:79736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.020 [2024-11-25 12:29:12.506342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:50.020 [2024-11-25 12:29:12.506358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:79744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.020 [2024-11-25 12:29:12.506365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:50.020 [2024-11-25 12:29:12.506385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:79752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.020 [2024-11-25 12:29:12.506393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:50.020 [2024-11-25 12:29:12.506409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:79760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.020 [2024-11-25 12:29:12.506416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:50.020 [2024-11-25 12:29:12.506432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:79768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.020 [2024-11-25 12:29:12.506440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:50.020 [2024-11-25 12:29:12.506457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:79776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.020 [2024-11-25 12:29:12.506465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:50.020 [2024-11-25 12:29:12.506481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:79784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.020 [2024-11-25 12:29:12.506488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:50.020 [2024-11-25 12:29:12.506504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:79792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.020 [2024-11-25 12:29:12.506511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:50.020 [2024-11-25 12:29:12.506528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:79800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.020 [2024-11-25 12:29:12.506535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:50.021 [2024-11-25 12:29:12.506637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:79808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.021 [2024-11-25 12:29:12.506647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:83 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:50.021 [2024-11-25 12:29:12.506667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:79816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.021 [2024-11-25 12:29:12.506675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:50.021 [2024-11-25 12:29:12.506693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:79824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.021 [2024-11-25 12:29:12.506702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:50.021 [2024-11-25 12:29:12.506720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:78928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.021 [2024-11-25 12:29:12.506728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:50.021 [2024-11-25 12:29:12.506747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:78936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.021 [2024-11-25 12:29:12.506765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:50.021 [2024-11-25 12:29:12.506788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:78944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.021 [2024-11-25 12:29:12.506796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:50.021 [2024-11-25 12:29:12.506815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:78952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.021 [2024-11-25 12:29:12.506822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:50.021 [2024-11-25 12:29:12.506841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:78960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.021 [2024-11-25 12:29:12.506848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:50.021 [2024-11-25 12:29:12.506867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:78968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.021 [2024-11-25 12:29:12.506874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:50.021 [2024-11-25 12:29:12.506893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:78976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.021 [2024-11-25 12:29:12.506900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:50.021 [2024-11-25 12:29:12.506919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:78984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.021 [2024-11-25 12:29:12.506927] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:50.021 [2024-11-25 12:29:12.506946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:78992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.021 [2024-11-25 12:29:12.506953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:50.021 [2024-11-25 12:29:12.506972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:79000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.021 [2024-11-25 12:29:12.506979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:50.021 [2024-11-25 12:29:12.506998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.021 [2024-11-25 12:29:12.507005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:50.021 [2024-11-25 12:29:12.507024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:79016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.021 [2024-11-25 12:29:12.507031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:50.021 [2024-11-25 12:29:12.507049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:79024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.021 [2024-11-25 12:29:12.507057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:50.021 [2024-11-25 12:29:12.507077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:79032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.021 [2024-11-25 12:29:12.507085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:50.021 [2024-11-25 12:29:12.507103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.021 [2024-11-25 12:29:12.507113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.021 [2024-11-25 12:29:12.507133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:79048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.021 [2024-11-25 12:29:12.507140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:50.021 10873.95 IOPS, 42.48 MiB/s [2024-11-25T12:29:45.897Z] 10401.17 IOPS, 40.63 MiB/s [2024-11-25T12:29:45.897Z] 9967.79 IOPS, 38.94 MiB/s [2024-11-25T12:29:45.897Z] 9569.08 IOPS, 37.38 MiB/s [2024-11-25T12:29:45.897Z] 9201.04 IOPS, 35.94 MiB/s [2024-11-25T12:29:45.897Z] 8860.26 IOPS, 34.61 MiB/s [2024-11-25T12:29:45.897Z] 8543.82 IOPS, 33.37 MiB/s [2024-11-25T12:29:45.897Z] 8564.59 IOPS, 33.46 MiB/s [2024-11-25T12:29:45.897Z] 8689.27 IOPS, 33.94 MiB/s [2024-11-25T12:29:45.897Z] 8808.03 IOPS, 34.41 MiB/s 
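A note on the status pair printed in the completions above: "(03/02)" is Status Code Type 0x3 (Path Related Status) / Status Code 0x2 (Asymmetric Access Inaccessible), i.e. the ANA group serving this namespace went Inaccessible mid-test, so queued READ/WRITE commands on qid:1 fail with a path error rather than a media error; dnr:0 means Do Not Retry is clear, so the host may retry on another path. A minimal sketch of how a host completion callback could recognize this, assuming SPDK's public nvme.h; the helper name cpl_is_ana_inaccessible is hypothetical, and the numeric codes come straight from the log:

/* Hypothetical sketch, not SPDK code: classify the path error above. */
#include <stdbool.h>
#include <stdio.h>
#include "spdk/nvme.h"

/* "(03/02)": sct 0x3 = path related, sc 0x2 = ANA inaccessible.
 * Recent SPDK spells these SPDK_NVME_SCT_PATH /
 * SPDK_NVME_SC_ASYMMETRIC_ACCESS_INACCESSIBLE. */
static bool cpl_is_ana_inaccessible(const struct spdk_nvme_cpl *cpl)
{
        return cpl->status.sct == 0x3 && cpl->status.sc == 0x2;
}

static void io_complete(void *ctx, const struct spdk_nvme_cpl *cpl)
{
        (void)ctx;
        if (!spdk_nvme_cpl_is_error(cpl)) {
                return;                 /* I/O succeeded */
        }
        if (cpl_is_ana_inaccessible(cpl)) {
                /* dnr:0 in the log: the command is retryable once
                 * multipath failover selects another path. */
                fprintf(stderr, "cid %u: ANA inaccessible, retryable\n", cpl->cid);
        }
}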
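The interleaved throughput samples are easy to sanity-check: every command in this log is len:8 logical blocks, and assuming 512-byte blocks (the block size is not stated in this excerpt), each I/O is 4 KiB, so MiB/s should equal IOPS * 4096 / 2^20. The dip from ~42 to ~33 MiB/s and partial recovery presumably tracks the ANA Inaccessible window. A quick check against the first samples:

/* Quick check (assumes 512 B logical blocks, i.e. len:8 => 4 KiB per I/O). */
#include <stdio.h>

int main(void)
{
        const double iops[] = { 10873.95, 10401.17, 9967.79, 9569.08 };
        for (unsigned i = 0; i < sizeof(iops) / sizeof(iops[0]); i++) {
                /* Prints 42.48, 40.63, 38.94, 37.38 - matching the log. */
                printf("%8.2f IOPS -> %.2f MiB/s\n", iops[i],
                       iops[i] * 4096.0 / (1024.0 * 1024.0));
        }
        return 0;
}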
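In the burst summarized below the failure mode changes: after another run of ANA path errors, completions switch to ABORTED - SQ DELETION (00/08) with constant cid:0 sqhd:0000, i.e. generic status (sct 0x0) / Command Aborted due to SQ Deletion (sc 0x08). These look like the synthetic completions a host stack manufactures for commands still outstanding when I/O submission queue 1 is torn down, presumably the initiator disconnecting the qpair after the ANA state change. A companion check to the sketch above, with the same caveats (helper name hypothetical, codes taken from the log):

/* Hypothetical sketch: spot commands aborted because their SQ was deleted. */
#include <stdbool.h>
#include "spdk/nvme.h"

/* "(00/08)": sct 0x0 = generic status, sc 0x08 = aborted, SQ deleted.
 * Recent SPDK spells these SPDK_NVME_SCT_GENERIC /
 * SPDK_NVME_SC_ABORTED_SQ_DELETION. */
static bool cpl_is_sq_deletion_abort(const struct spdk_nvme_cpl *cpl)
{
        return cpl->status.sct == 0x0 && cpl->status.sc == 0x8;
}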
00:30:50.022 [2024-11-25 12:29:25.600480 - 12:29:25.601042] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated command/completion pairs - READ (lba:10152..10320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE (lba:10328..10344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000) on sqid:1 nsid:1, every completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0
00:30:50.023 [2024-11-25 12:29:25.601214 - 12:29:25.601906] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated command/completion pairs - WRITE (lba:10352..10704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000) on sqid:1 nsid:1, every completion ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:50.023 [2024-11-25 12:29:25.601912] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.023 [2024-11-25 12:29:25.601920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:10712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.023 [2024-11-25 12:29:25.601927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.023 [2024-11-25 12:29:25.601935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:10720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.023 [2024-11-25 12:29:25.601942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.023 [2024-11-25 12:29:25.601955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:10728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.023 [2024-11-25 12:29:25.601962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.023 [2024-11-25 12:29:25.601970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:10736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.023 [2024-11-25 12:29:25.601977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.023 [2024-11-25 12:29:25.601984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:10744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.023 [2024-11-25 12:29:25.601991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.023 [2024-11-25 12:29:25.601999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:10752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.023 [2024-11-25 12:29:25.602006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.023 [2024-11-25 12:29:25.602014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:10760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.023 [2024-11-25 12:29:25.602021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.023 [2024-11-25 12:29:25.602029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:10768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.023 [2024-11-25 12:29:25.602035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.023 [2024-11-25 12:29:25.602043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:10776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.023 [2024-11-25 12:29:25.602050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.023 [2024-11-25 12:29:25.602058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:10784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.023 [2024-11-25 12:29:25.602065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.023 [2024-11-25 12:29:25.602073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.023 [2024-11-25 12:29:25.602081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.023 [2024-11-25 12:29:25.602090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.023 [2024-11-25 12:29:25.602104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.023 [2024-11-25 12:29:25.602112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:10808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.023 [2024-11-25 12:29:25.602119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.023 [2024-11-25 12:29:25.602127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:10816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.023 [2024-11-25 12:29:25.602134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.023 [2024-11-25 12:29:25.602142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:10824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.023 [2024-11-25 12:29:25.602152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.023 [2024-11-25 12:29:25.602161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.023 [2024-11-25 12:29:25.602168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.023 [2024-11-25 12:29:25.602176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:10840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.023 [2024-11-25 12:29:25.602183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.023 [2024-11-25 12:29:25.602191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:10848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.023 [2024-11-25 12:29:25.602197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.023 [2024-11-25 12:29:25.602206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.023 [2024-11-25 12:29:25.602213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.023 [2024-11-25 12:29:25.602221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:10864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.023 [2024-11-25 12:29:25.602228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:30:50.023 [2024-11-25 12:29:25.602236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:50.023 [2024-11-25 12:29:25.602242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.023 [2024-11-25 12:29:25.602264] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.023 [2024-11-25 12:29:25.602271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:8 PRP1 0x0 PRP2 0x0 00:30:50.023 [2024-11-25 12:29:25.602277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.023 [2024-11-25 12:29:25.602287] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.023 [2024-11-25 12:29:25.602292] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.023 [2024-11-25 12:29:25.602297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10888 len:8 PRP1 0x0 PRP2 0x0 00:30:50.024 [2024-11-25 12:29:25.602304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.024 [2024-11-25 12:29:25.602310] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.024 [2024-11-25 12:29:25.602316] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.024 [2024-11-25 12:29:25.602321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10896 len:8 PRP1 0x0 PRP2 0x0 00:30:50.024 [2024-11-25 12:29:25.602327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.024 [2024-11-25 12:29:25.602336] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.024 [2024-11-25 12:29:25.602341] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.024 [2024-11-25 12:29:25.602347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10904 len:8 PRP1 0x0 PRP2 0x0 00:30:50.024 [2024-11-25 12:29:25.602354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.024 [2024-11-25 12:29:25.602364] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.024 [2024-11-25 12:29:25.602369] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.024 [2024-11-25 12:29:25.602374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:8 PRP1 0x0 PRP2 0x0 00:30:50.024 [2024-11-25 12:29:25.602381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.024 [2024-11-25 12:29:25.602387] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.024 [2024-11-25 12:29:25.602392] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.024 [2024-11-25 12:29:25.602397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10920 len:8 PRP1 
0x0 PRP2 0x0 00:30:50.024 [2024-11-25 12:29:25.602404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.024 [2024-11-25 12:29:25.602411] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.024 [2024-11-25 12:29:25.602416] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.024 [2024-11-25 12:29:25.602422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10928 len:8 PRP1 0x0 PRP2 0x0 00:30:50.024 [2024-11-25 12:29:25.602429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.024 [2024-11-25 12:29:25.602436] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.024 [2024-11-25 12:29:25.602441] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.024 [2024-11-25 12:29:25.602446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10936 len:8 PRP1 0x0 PRP2 0x0 00:30:50.024 [2024-11-25 12:29:25.602453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.024 [2024-11-25 12:29:25.602460] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.024 [2024-11-25 12:29:25.602465] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.024 [2024-11-25 12:29:25.602470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:8 PRP1 0x0 PRP2 0x0 00:30:50.024 [2024-11-25 12:29:25.602476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.024 [2024-11-25 12:29:25.602483] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.024 [2024-11-25 12:29:25.602488] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.024 [2024-11-25 12:29:25.602493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10952 len:8 PRP1 0x0 PRP2 0x0 00:30:50.024 [2024-11-25 12:29:25.602500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.024 [2024-11-25 12:29:25.602507] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.024 [2024-11-25 12:29:25.602512] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.024 [2024-11-25 12:29:25.602517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10960 len:8 PRP1 0x0 PRP2 0x0 00:30:50.024 [2024-11-25 12:29:25.602524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.024 [2024-11-25 12:29:25.602532] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.024 [2024-11-25 12:29:25.602536] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.024 [2024-11-25 12:29:25.602543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10968 len:8 PRP1 0x0 PRP2 0x0 00:30:50.024 [2024-11-25 12:29:25.602552] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.024 [2024-11-25 12:29:25.602560] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.024 [2024-11-25 12:29:25.602565] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.024 [2024-11-25 12:29:25.602570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:8 PRP1 0x0 PRP2 0x0 00:30:50.024 [2024-11-25 12:29:25.602576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.024 [2024-11-25 12:29:25.602583] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.024 [2024-11-25 12:29:25.602588] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.024 [2024-11-25 12:29:25.602593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10984 len:8 PRP1 0x0 PRP2 0x0 00:30:50.024 [2024-11-25 12:29:25.602600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.024 [2024-11-25 12:29:25.602607] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.024 [2024-11-25 12:29:25.602612] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.024 [2024-11-25 12:29:25.602616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10992 len:8 PRP1 0x0 PRP2 0x0 00:30:50.024 [2024-11-25 12:29:25.602623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.024 [2024-11-25 12:29:25.602630] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.024 [2024-11-25 12:29:25.602635] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.024 [2024-11-25 12:29:25.602640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11000 len:8 PRP1 0x0 PRP2 0x0 00:30:50.024 [2024-11-25 12:29:25.602646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.024 [2024-11-25 12:29:25.602654] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.024 [2024-11-25 12:29:25.602658] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.024 [2024-11-25 12:29:25.602663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:8 PRP1 0x0 PRP2 0x0 00:30:50.024 [2024-11-25 12:29:25.602670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.024 [2024-11-25 12:29:25.602677] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.024 [2024-11-25 12:29:25.602682] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.024 [2024-11-25 12:29:25.602687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11016 len:8 PRP1 0x0 PRP2 0x0 00:30:50.024 [2024-11-25 12:29:25.602693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.024 [2024-11-25 12:29:25.602700] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.024 [2024-11-25 12:29:25.602705] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.024 [2024-11-25 12:29:25.602710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11024 len:8 PRP1 0x0 PRP2 0x0 00:30:50.024 [2024-11-25 12:29:25.602717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.024 [2024-11-25 12:29:25.602724] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.024 [2024-11-25 12:29:25.602729] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.024 [2024-11-25 12:29:25.602738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11032 len:8 PRP1 0x0 PRP2 0x0 00:30:50.024 [2024-11-25 12:29:25.602746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.024 [2024-11-25 12:29:25.602761] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.024 [2024-11-25 12:29:25.602766] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.024 [2024-11-25 12:29:25.602771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:8 PRP1 0x0 PRP2 0x0 00:30:50.024 [2024-11-25 12:29:25.602778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.024 [2024-11-25 12:29:25.602785] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.024 [2024-11-25 12:29:25.602790] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.024 [2024-11-25 12:29:25.602795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11048 len:8 PRP1 0x0 PRP2 0x0 00:30:50.024 [2024-11-25 12:29:25.602801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.024 [2024-11-25 12:29:25.602809] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.024 [2024-11-25 12:29:25.602814] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.024 [2024-11-25 12:29:25.602819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11056 len:8 PRP1 0x0 PRP2 0x0 00:30:50.024 [2024-11-25 12:29:25.602825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.025 [2024-11-25 12:29:25.602833] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.025 [2024-11-25 12:29:25.602837] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.025 [2024-11-25 12:29:25.602842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11064 len:8 PRP1 0x0 PRP2 0x0 00:30:50.025 [2024-11-25 12:29:25.602849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:30:50.025 [2024-11-25 12:29:25.602856] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.025 [2024-11-25 12:29:25.602861] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.025 [2024-11-25 12:29:25.613068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:8 PRP1 0x0 PRP2 0x0 00:30:50.025 [2024-11-25 12:29:25.613102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.025 [2024-11-25 12:29:25.613119] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.025 [2024-11-25 12:29:25.613126] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.025 [2024-11-25 12:29:25.613134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11080 len:8 PRP1 0x0 PRP2 0x0 00:30:50.025 [2024-11-25 12:29:25.613143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.025 [2024-11-25 12:29:25.613152] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.025 [2024-11-25 12:29:25.613158] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.025 [2024-11-25 12:29:25.613164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11088 len:8 PRP1 0x0 PRP2 0x0 00:30:50.025 [2024-11-25 12:29:25.613172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.025 [2024-11-25 12:29:25.613193] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.025 [2024-11-25 12:29:25.613200] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.025 [2024-11-25 12:29:25.613207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11096 len:8 PRP1 0x0 PRP2 0x0 00:30:50.025 [2024-11-25 12:29:25.613216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.025 [2024-11-25 12:29:25.613225] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.025 [2024-11-25 12:29:25.613231] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.025 [2024-11-25 12:29:25.613237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:8 PRP1 0x0 PRP2 0x0 00:30:50.025 [2024-11-25 12:29:25.613245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.025 [2024-11-25 12:29:25.613254] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.025 [2024-11-25 12:29:25.613260] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.025 [2024-11-25 12:29:25.613267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11112 len:8 PRP1 0x0 PRP2 0x0 00:30:50.025 [2024-11-25 12:29:25.613276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.025 [2024-11-25 
12:29:25.613284] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.025 [2024-11-25 12:29:25.613290] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.025 [2024-11-25 12:29:25.613297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11120 len:8 PRP1 0x0 PRP2 0x0 00:30:50.025 [2024-11-25 12:29:25.613305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.025 [2024-11-25 12:29:25.613314] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.025 [2024-11-25 12:29:25.613320] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.025 [2024-11-25 12:29:25.613326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11128 len:8 PRP1 0x0 PRP2 0x0 00:30:50.025 [2024-11-25 12:29:25.613334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.025 [2024-11-25 12:29:25.613343] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.025 [2024-11-25 12:29:25.613349] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.025 [2024-11-25 12:29:25.613355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:8 PRP1 0x0 PRP2 0x0 00:30:50.025 [2024-11-25 12:29:25.613364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.025 [2024-11-25 12:29:25.613373] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.025 [2024-11-25 12:29:25.613378] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.025 [2024-11-25 12:29:25.613385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11144 len:8 PRP1 0x0 PRP2 0x0 00:30:50.025 [2024-11-25 12:29:25.613393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.025 [2024-11-25 12:29:25.613402] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.025 [2024-11-25 12:29:25.613408] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.025 [2024-11-25 12:29:25.613414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11152 len:8 PRP1 0x0 PRP2 0x0 00:30:50.025 [2024-11-25 12:29:25.613426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.025 [2024-11-25 12:29:25.613436] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.025 [2024-11-25 12:29:25.613442] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.025 [2024-11-25 12:29:25.613449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11160 len:8 PRP1 0x0 PRP2 0x0 00:30:50.025 [2024-11-25 12:29:25.613457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.025 [2024-11-25 12:29:25.613466] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:50.025 [2024-11-25 12:29:25.613472] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:50.025 [2024-11-25 12:29:25.613478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:8 PRP1 0x0 PRP2 0x0 00:30:50.025 [2024-11-25 12:29:25.613487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.025 [2024-11-25 12:29:25.613613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:50.025 [2024-11-25 12:29:25.613629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.025 [2024-11-25 12:29:25.613641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:50.025 [2024-11-25 12:29:25.613649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.025 [2024-11-25 12:29:25.613658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:50.025 [2024-11-25 12:29:25.613667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.025 [2024-11-25 12:29:25.613676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:50.025 [2024-11-25 12:29:25.613685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.025 [2024-11-25 12:29:25.613695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:50.025 [2024-11-25 12:29:25.613704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:50.025 [2024-11-25 12:29:25.613718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10c4190 is same with the state(6) to be set 00:30:50.025 [2024-11-25 12:29:25.615199] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:50.025 [2024-11-25 12:29:25.615228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10c4190 (9): Bad file descriptor 00:30:50.025 [2024-11-25 12:29:25.615465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:50.025 [2024-11-25 12:29:25.615482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10c4190 with addr=10.0.0.3, port=4421 00:30:50.025 [2024-11-25 12:29:25.615492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10c4190 is same with the state(6) to be set 00:30:50.025 [2024-11-25 12:29:25.615591] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10c4190 (9): Bad file descriptor 00:30:50.025 [2024-11-25 12:29:25.615620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:50.025 
00:30:50.025 [2024-11-25 12:29:25.615630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:50.025 [2024-11-25 12:29:25.615648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:50.025 [2024-11-25 12:29:25.615657] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:50.025 [2024-11-25 12:29:25.615667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:50.025 9135.43 IOPS, 35.69 MiB/s [2024-11-25T12:29:45.901Z] 9182.31 IOPS, 35.87 MiB/s [2024-11-25T12:29:45.901Z] 9233.86 IOPS, 36.07 MiB/s [2024-11-25T12:29:45.901Z] 9273.18 IOPS, 36.22 MiB/s [2024-11-25T12:29:45.901Z] 9309.82 IOPS, 36.37 MiB/s [2024-11-25T12:29:45.901Z] 9333.08 IOPS, 36.46 MiB/s [2024-11-25T12:29:45.901Z] 9352.83 IOPS, 36.53 MiB/s [2024-11-25T12:29:45.901Z] 9389.81 IOPS, 36.68 MiB/s [2024-11-25T12:29:45.901Z] 9414.84 IOPS, 36.78 MiB/s [2024-11-25T12:29:45.901Z] 9437.39 IOPS, 36.86 MiB/s [2024-11-25T12:29:45.901Z]
[2024-11-25 12:29:35.663628] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:30:50.025 9487.13 IOPS, 37.06 MiB/s [2024-11-25T12:29:45.901Z] 9543.50 IOPS, 37.28 MiB/s [2024-11-25T12:29:45.901Z] 9599.68 IOPS, 37.50 MiB/s [2024-11-25T12:29:45.901Z] 9644.90 IOPS, 37.68 MiB/s [2024-11-25T12:29:45.901Z] 9640.49 IOPS, 37.66 MiB/s [2024-11-25T12:29:45.901Z] 9638.62 IOPS, 37.65 MiB/s [2024-11-25T12:29:45.901Z] 9636.49 IOPS, 37.64 MiB/s [2024-11-25T12:29:45.901Z] 9668.69 IOPS, 37.77 MiB/s [2024-11-25T12:29:45.901Z] 9710.19 IOPS, 37.93 MiB/s [2024-11-25T12:29:45.901Z] 9757.48 IOPS, 38.12 MiB/s [2024-11-25T12:29:45.901Z]
Received shutdown signal, test time was about 54.292278 seconds
00:30:50.025
00:30:50.025                                                                                                 Latency(us)
00:30:50.025 [2024-11-25T12:29:45.901Z] Device Information                     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:30:50.025 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:30:50.026 Verification LBA range: start 0x0 length 0x4000
00:30:50.026 Nvme0n1                                : 54.29    9770.58      38.17       0.00       0.00   13077.81     304.05 7020619.62
00:30:50.026 [2024-11-25T12:29:45.902Z] ===================================================================================================================
00:30:50.026 [2024-11-25T12:29:45.902Z] Total                                  :          9770.58      38.17       0.00       0.00   13077.81     304.05 7020619.62
00:30:50.026 12:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:30:50.026 12:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:30:50.026 12:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:30:50.284 12:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini
00:30:50.284 12:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:30:50.284 12:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync
00:30:50.284 12:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:30:50.284 12:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e
00:30:50.284 12:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:30:50.284 12:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:30:50.284 rmmod nvme_tcp
00:30:50.284 rmmod nvme_fabrics
00:30:50.284 rmmod nvme_keyring
00:30:50.284 12:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:30:50.284 12:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e
00:30:50.284 12:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0
00:30:50.284 12:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 94906 ']'
00:30:50.284 12:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 94906
00:30:50.284 12:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 94906 ']'
00:30:50.284 12:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 94906
00:30:50.284 12:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname
00:30:50.284 12:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:30:50.284 12:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 94906
00:30:50.284 killing process with pid 94906
00:30:50.284 12:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:30:50.284 12:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:30:50.284 12:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 94906'
00:30:50.284 12:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 94906
00:30:50.284 12:29:45 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 94906
00:30:50.284 12:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:30:50.284 12:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:30:50.284 12:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:30:50.284 12:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr
00:30:50.284 12:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save
00:30:50.284 12:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:30:50.284 12:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore
00:30:50.284 12:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:30:50.284 12:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:30:50.284 12:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:30:50.284 12:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:30:50.284 12:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:30:50.541 12:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:30:50.541 12:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:30:50.541 12:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:30:50.541 12:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:30:50.541 12:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:30:50.541 12:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:30:50.541 12:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:30:50.541 12:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:30:50.541 12:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:30:50.541 12:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:30:50.541 12:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns
00:30:50.541 12:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:50.541 12:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:30:50.541 12:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:50.541 12:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0
00:30:50.541
00:30:50.541 real	0m59.643s
00:30:50.541 user	2m51.218s
00:30:50.541 sys	0m9.730s
00:30:50.541 12:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:50.541 12:29:46 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x
00:30:50.541 ************************************
00:30:50.541 END TEST nvmf_host_multipath
00:30:50.541 ************************************
00:30:50.541 12:29:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp
00:30:50.541 12:29:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:30:50.541 12:29:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:30:50.541 12:29:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:30:50.541 ************************************
00:30:50.541 START TEST nvmf_timeout
00:30:50.541 ************************************
00:30:50.541 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp
00:30:50.799 * Looking for test storage...
00:30:50.799 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]]
12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lcov --version
12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2
12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l
12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l
12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-:
12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1
12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-:
12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2
12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<'
12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2
12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1
12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in
12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1
12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 ))
12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1
12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1
12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1
12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1
12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2
12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2
12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2
12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2
12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0
12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:30:50.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:50.800 --rc genhtml_branch_coverage=1
00:30:50.800 --rc genhtml_function_coverage=1
00:30:50.800 --rc genhtml_legend=1
00:30:50.800 --rc geninfo_all_blocks=1
00:30:50.800 --rc geninfo_unexecuted_blocks=1
00:30:50.800
00:30:50.800 '
12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:30:50.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:50.800 --rc genhtml_branch_coverage=1
00:30:50.800 --rc genhtml_function_coverage=1
00:30:50.800 --rc genhtml_legend=1
00:30:50.800 --rc geninfo_all_blocks=1
00:30:50.800 --rc geninfo_unexecuted_blocks=1
00:30:50.800
00:30:50.800 '
12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:30:50.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:50.800 --rc genhtml_branch_coverage=1
00:30:50.800 --rc genhtml_function_coverage=1
00:30:50.800 --rc genhtml_legend=1
00:30:50.800 --rc geninfo_all_blocks=1
00:30:50.800 --rc geninfo_unexecuted_blocks=1
00:30:50.800
00:30:50.800 '
12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:30:50.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:50.800 --rc genhtml_branch_coverage=1
00:30:50.800 --rc genhtml_function_coverage=1
00:30:50.800 --rc genhtml_legend=1
00:30:50.800 --rc geninfo_all_blocks=1
00:30:50.800 --rc geninfo_unexecuted_blocks=1
00:30:50.800
00:30:50.800 '
12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s
12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420
12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:50.800 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:50.800 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:50.800 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:50.800 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:50.800 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:50.800 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:50.800 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:50.800 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:30:50.800 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:30:50.800 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:50.800 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:50.800 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:50.800 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:50.800 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:50.800 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:30:50.800 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:50.800 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:50.800 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:50.800 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.800 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.800 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.800 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:30:50.800 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.800 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:30:50.800 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:50.800 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:50.800 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:50.800 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:50.800 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:50.800 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:50.800 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:50.800 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:50.800 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:50.800 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:50.800 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:50.800 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:50.800 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:50.800 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:30:50.800 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:50.800 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:30:50.800 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:50.800 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:50.800 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:50.800 12:29:46 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:50.800 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:50.800 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:50.800 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:50.800 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:50.800 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:30:50.800 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:30:50.800 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:30:50.800 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:30:50.800 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:30:50.800 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:30:50.800 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:50.800 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:30:50.801 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:30:50.801 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:30:50.801 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:50.801 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:30:50.801 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:50.801 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:30:50.801 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:50.801 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:30:50.801 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:50.801 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:50.801 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:50.801 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:50.801 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:50.801 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:50.801 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:30:50.801 Cannot find device "nvmf_init_br" 00:30:50.801 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:30:50.801 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:30:50.801 Cannot find device "nvmf_init_br2" 00:30:50.801 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:30:50.801 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:30:50.801 Cannot find device "nvmf_tgt_br" 00:30:50.801 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:30:50.801 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:30:50.801 Cannot find device "nvmf_tgt_br2" 00:30:50.801 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:30:50.801 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:30:50.801 Cannot find device "nvmf_init_br" 00:30:50.801 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:30:50.801 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:30:50.801 Cannot find device "nvmf_init_br2" 00:30:50.801 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:30:50.801 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:30:50.801 Cannot find device "nvmf_tgt_br" 00:30:50.801 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:30:50.801 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:30:50.801 Cannot find device "nvmf_tgt_br2" 00:30:50.801 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:30:50.801 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:30:50.801 Cannot find device "nvmf_br" 00:30:50.801 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:30:50.801 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:30:50.801 Cannot find device "nvmf_init_if" 00:30:50.801 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:30:50.801 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:30:51.059 Cannot find device "nvmf_init_if2" 00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:51.059 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:51.059 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
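
The run of "Cannot find device" failures above is the expected first pass of nvmf_veth_init: common.sh tears down any leftover topology before rebuilding it, and every failing ip command is tolerated (each one is followed by a true). A minimal sketch of that best-effort cleanup, using the device and namespace names from this log; the exact guarding in test/nvmf/common.sh may differ:

    # Best-effort teardown; on a fresh runner every command fails, hence "|| true".
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster 2>/dev/null || true   # detach from bridge
        ip link set "$dev" down 2>/dev/null || true
    done
    ip link delete nvmf_br type bridge 2>/dev/null || true
    ip link delete nvmf_init_if 2>/dev/null || true
    ip link delete nvmf_init_if2 2>/dev/null || true
    # the target-side veths live inside the namespace, so delete them there
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 2>/dev/null || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 2>/dev/null || true
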
00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
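
Taken together, common.sh@177 through @219 have now built the test network: two veth pairs whose *_if ends stay in the root namespace for the initiator side, two whose *_if ends move into nvmf_tgt_ns_spdk for the target, all four *_br ends enslaved to the nvmf_br bridge, and iptables opened for NVMe/TCP port 4420 (the ipts wrapper seen above additionally tags each rule with an SPDK_NVMF comment so it can be cleaned up later). A condensed sketch of the same topology, names and addresses exactly as logged:

    NS=nvmf_tgt_ns_spdk
    ip netns add "$NS"
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if netns "$NS"      # target-side ends live in the netns
    ip link set nvmf_tgt_if2 netns "$NS"
    ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target
    ip netns exec "$NS" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
               nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec "$NS" ip link set nvmf_tgt_if up
    ip netns exec "$NS" ip link set nvmf_tgt_if2 up
    ip netns exec "$NS" ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br    # one bridge joins all four legs
    done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
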
00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:30:51.059 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:51.059 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:30:51.059 00:30:51.059 --- 10.0.0.3 ping statistics --- 00:30:51.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:51.059 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:30:51.059 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:30:51.059 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.024 ms 00:30:51.059 00:30:51.059 --- 10.0.0.4 ping statistics --- 00:30:51.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:51.059 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:51.059 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:51.059 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:30:51.059 00:30:51.059 --- 10.0.0.1 ping statistics --- 00:30:51.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:51.059 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:30:51.059 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:51.059 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:30:51.059 00:30:51.059 --- 10.0.0.2 ping statistics --- 00:30:51.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:51.059 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=96321 00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 96321 00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 96321 ']' 00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
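
The four pings above are the gate of nvmf_veth_init: root namespace to both target addresses, then the namespace back to both initiator addresses, proving the bridge forwards in both directions before any NVMe-oF traffic is attempted. Only after that does common.sh@227 prefix NVMF_APP with the namespace wrapper, so every later target invocation runs inside nvmf_tgt_ns_spdk. A sketch of the same gating check:

    NS=nvmf_tgt_ns_spdk
    for ip in 10.0.0.3 10.0.0.4; do ping -c 1 "$ip" || exit 1; done   # root -> target
    for ip in 10.0.0.1 10.0.0.2; do
        ip netns exec "$NS" ping -c 1 "$ip" || exit 1                 # target -> root
    done
    # from here on the target always runs namespaced, e.g.:
    # ip netns exec "$NS" .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
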
00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:51.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:51.059 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:30:51.060 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:51.060 12:29:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:51.060 [2024-11-25 12:29:46.928222] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:30:51.060 [2024-11-25 12:29:46.928281] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:51.317 [2024-11-25 12:29:47.066866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:51.317 [2024-11-25 12:29:47.116266] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:51.317 [2024-11-25 12:29:47.116327] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:51.317 [2024-11-25 12:29:47.116338] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:51.317 [2024-11-25 12:29:47.116346] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:51.317 [2024-11-25 12:29:47.116354] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
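
waitforlisten (from autotest_common.sh) is what printed the "Waiting for process..." line above: it polls the freshly spawned target's RPC socket until it answers, and the `(( i == 0 ))` / `return 0` pair below is that loop terminating on the first successful probe. A simplified sketch of the idea; wait_for_rpc is a made-up name, the probe via rpc_get_methods is an assumption, and the real helper also handles retry limits and alternate socket paths:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    wait_for_rpc() {                      # hypothetical stand-in for waitforlisten
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for ((i = 100; i > 0; i--)); do
            kill -0 "$pid" 2>/dev/null || return 1   # app died before listening
            "$rpc_py" -s "$sock" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1                          # never came up
    }
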
00:30:51.317 [2024-11-25 12:29:47.117244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:51.317 [2024-11-25 12:29:47.117263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:51.574 12:29:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:51.574 12:29:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:30:51.574 12:29:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:51.574 12:29:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:51.574 12:29:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:51.574 12:29:47 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:51.574 12:29:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:51.574 12:29:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:51.574 [2024-11-25 12:29:47.433699] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:51.830 12:29:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:51.830 Malloc0 00:30:51.830 12:29:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:52.115 12:29:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:52.377 12:29:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:30:52.635 [2024-11-25 12:29:48.282674] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:30:52.635 12:29:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=96399 00:30:52.635 12:29:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:30:52.635 12:29:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 96399 /var/tmp/bdevperf.sock 00:30:52.635 12:29:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 96399 ']' 00:30:52.635 12:29:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:52.635 12:29:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:52.635 12:29:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:52.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
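
host/timeout.sh@25-31 have now provisioned the target over JSON-RPC and launched bdevperf in -z mode, where it idles on its own RPC socket until told to run. The same sequence, condensed, with paths and arguments exactly as logged:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # host side: -z parks bdevperf until perform_tests arrives over its socket
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
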
00:30:52.635 12:29:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:52.635 12:29:48 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:30:52.635 [2024-11-25 12:29:48.336120] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:30:52.635 [2024-11-25 12:29:48.336181] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96399 ] 00:30:52.635 [2024-11-25 12:29:48.476238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:52.892 [2024-11-25 12:29:48.512380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:53.457 12:29:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:53.457 12:29:49 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:30:53.457 12:29:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:30:53.716 12:29:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:30:53.974 NVMe0n1 00:30:53.974 12:29:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:53.974 12:29:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=96446 00:30:53.974 12:29:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:30:53.974 Running I/O for 10 seconds... 
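
The two RPCs against the bdevperf socket are the core of the timeout test: the controller is attached with a 2 second reconnect delay and a 5 second controller-loss timeout (and, reading -r -1 as the transport retry count, presumably no retry cap), then the parked verify workload is kicked off. When the listener is removed in the next step, the in-flight reads must be aborted (the "ABORTED - SQ DELETION" storm below) and the host must keep reconnecting within those bounds rather than failing instantly. Condensed, as logged:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    $rpc -s "$sock" bdev_nvme_set_options -r -1
    $rpc -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
         -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
         --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    # drive the workload bdevperf was started with (-q 128 -o 4096 -w verify -t 10)
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s "$sock" perform_tests &
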
00:30:54.926 12:29:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:30:55.187 12209.00 IOPS, 47.69 MiB/s [2024-11-25T12:29:51.063Z] [2024-11-25 12:29:50.891477] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbbc90 is same with the state(6) to be set
[... the identical tcp.c:1773 message repeats roughly 40 more times for tqpair=0xcbbc90 between 12:29:50.891614 and 12:29:50.892641 while the removed listener's queue pair is drained ...]
00:30:55.187 [2024-11-25 12:29:50.893287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:108784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.187 [2024-11-25 12:29:50.893317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.187 [2024-11-25 12:29:50.893331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1
lba:108792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.187 [2024-11-25 12:29:50.893338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.187 [2024-11-25 12:29:50.893346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:108800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.187 [2024-11-25 12:29:50.893352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.187 [2024-11-25 12:29:50.893360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:108808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.187 [2024-11-25 12:29:50.893366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.187 [2024-11-25 12:29:50.893373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:108816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.187 [2024-11-25 12:29:50.893378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.187 [2024-11-25 12:29:50.893386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:108824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.188 [2024-11-25 12:29:50.893392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.188 [2024-11-25 12:29:50.893399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:108832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.188 [2024-11-25 12:29:50.893404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.188 [2024-11-25 12:29:50.893412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:108840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.188 [2024-11-25 12:29:50.893417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.188 [2024-11-25 12:29:50.893425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:109480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:55.188 [2024-11-25 12:29:50.893431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.188 [2024-11-25 12:29:50.893439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:108848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.188 [2024-11-25 12:29:50.893445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.188 [2024-11-25 12:29:50.893452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:108856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.188 [2024-11-25 12:29:50.893458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.188 [2024-11-25 12:29:50.893465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:108864 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:30:55.188 [2024-11-25 12:29:50.893471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.188 [2024-11-25 12:29:50.893478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:108872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.188 [2024-11-25 12:29:50.893483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.188 [2024-11-25 12:29:50.893491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:108880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.188 [2024-11-25 12:29:50.893496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.188 [2024-11-25 12:29:50.893504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:108888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.188 [2024-11-25 12:29:50.893509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.188 [2024-11-25 12:29:50.893516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:108896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.188 [2024-11-25 12:29:50.893524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.188 [2024-11-25 12:29:50.893531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:108904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.188 [2024-11-25 12:29:50.893538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.188 [2024-11-25 12:29:50.893545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:108912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.188 [2024-11-25 12:29:50.893551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.188 [2024-11-25 12:29:50.893558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:108920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.188 [2024-11-25 12:29:50.893564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.188 [2024-11-25 12:29:50.893571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:108928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.188 [2024-11-25 12:29:50.893577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.188 [2024-11-25 12:29:50.893584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:108936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.188 [2024-11-25 12:29:50.893589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.188 [2024-11-25 12:29:50.893597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:108944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:55.188 [2024-11-25 12:29:50.893603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.188 [2024-11-25 12:29:50.893611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:108952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.188 [2024-11-25 12:29:50.893616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.188 [2024-11-25 12:29:50.893624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:108960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.188 [2024-11-25 12:29:50.893629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.188 [2024-11-25 12:29:50.893636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:108968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.188 [2024-11-25 12:29:50.893642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.188 [2024-11-25 12:29:50.893649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:108976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.188 [2024-11-25 12:29:50.893655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.188 [2024-11-25 12:29:50.893662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:108984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.188 [2024-11-25 12:29:50.893668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.188 [2024-11-25 12:29:50.893675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:108992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.188 [2024-11-25 12:29:50.893680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.188 [2024-11-25 12:29:50.893688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:109000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.188 [2024-11-25 12:29:50.893693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.188 [2024-11-25 12:29:50.893701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:109008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.188 [2024-11-25 12:29:50.893706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.188 [2024-11-25 12:29:50.893714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:109016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.188 [2024-11-25 12:29:50.893719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.188 [2024-11-25 12:29:50.893726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:109024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.188 [2024-11-25 
12:29:50.893732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.188 [2024-11-25 12:29:50.893740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:109032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.188 [2024-11-25 12:29:50.893746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.188 [2024-11-25 12:29:50.893763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:109040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.188 [2024-11-25 12:29:50.893768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.188 [2024-11-25 12:29:50.893776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:109048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.188 [2024-11-25 12:29:50.893782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.188 [2024-11-25 12:29:50.893790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:109056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.188 [2024-11-25 12:29:50.893795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.188 [2024-11-25 12:29:50.893803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:109064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.188 [2024-11-25 12:29:50.893808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.188 [2024-11-25 12:29:50.893816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:109072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.188 [2024-11-25 12:29:50.893821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.188 [2024-11-25 12:29:50.893829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:109080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.188 [2024-11-25 12:29:50.893834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.188 [2024-11-25 12:29:50.893841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:109088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.188 [2024-11-25 12:29:50.893847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.188 [2024-11-25 12:29:50.893854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:109096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.188 [2024-11-25 12:29:50.893860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.188 [2024-11-25 12:29:50.893867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:109104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.188 [2024-11-25 12:29:50.893873] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.188 [2024-11-25 12:29:50.893880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:109112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.188 [2024-11-25 12:29:50.893891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.188 [2024-11-25 12:29:50.893898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:109120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.188 [2024-11-25 12:29:50.893904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.188 [2024-11-25 12:29:50.893911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:109128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.188 [2024-11-25 12:29:50.893916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.189 [2024-11-25 12:29:50.893924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:109136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.189 [2024-11-25 12:29:50.893929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.189 [2024-11-25 12:29:50.893936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:109144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.189 [2024-11-25 12:29:50.893942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.189 [2024-11-25 12:29:50.893949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:109152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.189 [2024-11-25 12:29:50.893955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.189 [2024-11-25 12:29:50.893962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:109160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.189 [2024-11-25 12:29:50.893968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.189 [2024-11-25 12:29:50.893976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:109168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.189 [2024-11-25 12:29:50.893981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.189 [2024-11-25 12:29:50.893989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:109176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.189 [2024-11-25 12:29:50.893994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.189 [2024-11-25 12:29:50.894001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:109184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.189 [2024-11-25 12:29:50.894008] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.189 [2024-11-25 12:29:50.894015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:109192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.189 [2024-11-25 12:29:50.894020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.189 [2024-11-25 12:29:50.894028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:109200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.189 [2024-11-25 12:29:50.894034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.189 [2024-11-25 12:29:50.894041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:109208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.189 [2024-11-25 12:29:50.894047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.189 [2024-11-25 12:29:50.894054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:109216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.189 [2024-11-25 12:29:50.894059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.189 [2024-11-25 12:29:50.894066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:109224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.189 [2024-11-25 12:29:50.894072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.189 [2024-11-25 12:29:50.894088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:109232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.189 [2024-11-25 12:29:50.894093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.189 [2024-11-25 12:29:50.894101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:109240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.189 [2024-11-25 12:29:50.894106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.189 [2024-11-25 12:29:50.894113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:109248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.189 [2024-11-25 12:29:50.894118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.189 [2024-11-25 12:29:50.894126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:109256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.189 [2024-11-25 12:29:50.894131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:55.189 [2024-11-25 12:29:50.894138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:109264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:55.189 [2024-11-25 12:29:50.894143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:55.189 [2024-11-25 12:29:50.894151 .. 12:29:50.895080] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: [... repeated NOTICE pairs: every in-flight READ on sqid:1 (lba:109272..109472, len:8, SGL TRANSPORT DATA BLOCK) and WRITE (lba:109488..109736, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) completes with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs then logs *ERROR*: aborting queued i/o, and 558:nvme_qpair_manual_complete_request completes the queued WRITEs (lba:109744..109800, PRP1 0x0 PRP2 0x0) manually with the same status ...]
00:30:55.191 [2024-11-25 12:29:50.895341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:30:55.191 [2024-11-25 12:29:50.895402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc6f50 (9): Bad file descriptor
00:30:55.191 [2024-11-25 12:29:50.895469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:55.191 [2024-11-25 12:29:50.895481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc6f50 with addr=10.0.0.3, port=4420
00:30:55.191 [2024-11-25 12:29:50.895487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc6f50 is same with the state(6) to be set
00:30:55.191 [2024-11-25 12:29:50.895497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc6f50 (9): Bad file descriptor
00:30:55.191 [2024-11-25 12:29:50.895506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:30:55.191 [2024-11-25 12:29:50.895512] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:30:55.191 [2024-11-25 12:29:50.895518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:30:55.191 [2024-11-25 12:29:50.895524] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:30:55.191 [2024-11-25 12:29:50.895530] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:30:55.191 12:29:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2
00:30:57.055 6799.00 IOPS, 26.56 MiB/s [2024-11-25T12:29:52.931Z] 4532.67 IOPS, 17.71 MiB/s [2024-11-25T12:29:52.931Z]
00:30:57.055 [2024-11-25 12:29:52.895745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:57.055 [2024-11-25 12:29:52.895793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc6f50 with addr=10.0.0.3, port=4420
00:30:57.055 [2024-11-25 12:29:52.895799] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc6f50 is same with the state(6) to be set
00:30:57.055 [2024-11-25 12:29:52.895813] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc6f50 (9): Bad file descriptor
00:30:57.055 [2024-11-25 12:29:52.895828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:30:57.055 [2024-11-25 12:29:52.895833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:30:57.055 [2024-11-25 12:29:52.895838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:30:57.055 [2024-11-25 12:29:52.895845] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:30:57.055 [2024-11-25 12:29:52.895850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
12:29:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller
12:29:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
12:29:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
00:30:57.313 12:29:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]]
12:29:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev
12:29:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
12:29:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:30:57.571 12:29:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]]
12:29:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5
00:30:59.068 3399.50 IOPS, 13.28 MiB/s [2024-11-25T12:29:54.944Z] 2719.60 IOPS, 10.62 MiB/s [2024-11-25T12:29:54.944Z]
[2024-11-25 12:29:54.896069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:59.068 [2024-11-25 12:29:54.896108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc6f50 with addr=10.0.0.3, port=4420
[2024-11-25 12:29:54.896116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc6f50 is same with the state(6) to be set
00:30:59.068 [2024-11-25 12:29:54.896130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc6f50 (9): Bad file descriptor
00:30:59.068 [2024-11-25 12:29:54.896141] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
[2024-11-25 12:29:54.896146] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
[2024-11-25 12:29:54.896152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
[2024-11-25 12:29:54.896159] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
[2024-11-25 12:29:54.896165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:31:00.933 2266.33 IOPS, 8.85 MiB/s [2024-11-25T12:29:57.065Z] 1942.57 IOPS, 7.59 MiB/s [2024-11-25T12:29:57.065Z]
[2024-11-25 12:29:56.896319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:31:01.189 [2024-11-25 12:29:56.896347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
[2024-11-25 12:29:56.896351] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
[2024-11-25 12:29:56.896357] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state
[2024-11-25 12:29:56.896362] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
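The get_controller and get_bdev checks traced at host/timeout.sh@57/@58 above are thin wrappers around the bdevperf RPC socket. A minimal sketch of the two helpers, reconstructed from the rpc.py/jq invocations visible in the trace (the real definitions live in host/timeout.sh and may differ in detail):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    get_controller() {
        # Print the name of each NVMe controller the bdevperf app knows about,
        # e.g. "NVMe0" while attached, empty once the controller has been deleted.
        "$rpc_py" -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'
    }

    get_bdev() {
        # Same idea for the exposed bdevs, e.g. "NVMe0n1".
        "$rpc_py" -s /var/tmp/bdevperf.sock bdev_get_bdevs | jq -r '.[].name'
    }

    # Used exactly as in the checks above: both names are still present at this
    # point in the run, and both come back empty after the controller is lost.
    [[ $(get_controller) == "NVMe0" ]]
    [[ $(get_bdev) == "NVMe0n1" ]]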
00:31:02.152 1699.75 IOPS, 6.64 MiB/s
00:31:02.152 Latency(us)
00:31:02.152 [2024-11-25T12:29:58.028Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:02.152 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:31:02.152 Verification LBA range: start 0x0 length 0x4000
00:31:02.152 NVMe0n1 : 8.12 1674.70 6.54 15.76 0.00 75580.79 1890.46 7020619.62
00:31:02.152 [2024-11-25T12:29:58.028Z] ===================================================================================================================
00:31:02.152 [2024-11-25T12:29:58.028Z] Total : 1674.70 6.54 15.76 0.00 75580.79 1890.46 7020619.62
00:31:02.152 {
00:31:02.152   "results": [
00:31:02.152     {
00:31:02.152       "job": "NVMe0n1",
00:31:02.152       "core_mask": "0x4",
00:31:02.152       "workload": "verify",
00:31:02.152       "status": "finished",
00:31:02.152       "verify_range": {
00:31:02.152         "start": 0,
00:31:02.152         "length": 16384
00:31:02.152       },
00:31:02.152       "queue_depth": 128,
00:31:02.152       "io_size": 4096,
00:31:02.152       "runtime": 8.119646,
00:31:02.152       "iops": 1674.7035523469865,
00:31:02.152       "mibps": 6.541810751355416,
00:31:02.152       "io_failed": 128,
00:31:02.152       "io_timeout": 0,
00:31:02.152       "avg_latency_us": 75580.78728163283,
00:31:02.152       "min_latency_us": 1890.4615384615386,
00:31:02.152       "max_latency_us": 7020619.618461538
00:31:02.152     }
00:31:02.152   ],
00:31:02.152   "core_count": 1
00:31:02.152 }
00:31:02.717 12:29:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller
12:29:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name'
12:29:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:31:02.975 12:29:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]]
12:29:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev
12:29:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
12:29:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name'
00:31:02.975 12:29:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]]
12:29:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 96446
12:29:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 96399
12:29:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 96399 ']'
12:29:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 96399
12:29:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname
12:29:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
12:29:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96399
12:29:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2
12:29:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
killing process with pid 96399
12:29:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96399'
Received shutdown signal, test time was about 9.034184 seconds
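The JSON block above is bdevperf's machine-readable copy of the latency table: "io_failed": 128 matches the configured queue depth (the in-flight commands aborted by the SQ deletion), and the runtime stops at ~8.12 s of the requested 10 because the controller was lost mid-run. A minimal sketch of pulling the headline numbers out of it with jq, assuming the output has been captured to a file (results.json is a hypothetical name; in this run the JSON goes straight to the console):

    jq -r '.results[] |
        "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, \(.io_failed) failed I/Os, avg \(.avg_latency_us) us"' \
        results.json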
00:31:02.975
00:31:02.975 Latency(us)
[2024-11-25T12:29:58.851Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-11-25T12:29:58.851Z] ===================================================================================================================
[2024-11-25T12:29:58.851Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:31:02.975 12:29:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 96399
12:29:58 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 96399
00:31:03.233 12:29:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:31:03.233 [2024-11-25 12:29:59.093576] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:31:03.490 12:29:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:31:03.490 12:29:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=96599
00:31:03.490 12:29:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 96599 /var/tmp/bdevperf.sock
00:31:03.490 12:29:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 96599 ']'
00:31:03.490 12:29:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:31:03.490 12:29:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:03.490 12:29:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:31:03.490 12:29:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:03.490 12:29:59 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:31:03.490 [2024-11-25 12:29:59.140218] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization...
00:31:03.490 [2024-11-25 12:29:59.140280] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96599 ]
00:31:03.490 [2024-11-25 12:29:59.274703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:03.490 [2024-11-25 12:29:59.304872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:31:04.423 12:30:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:04.423 12:30:00 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0
00:31:04.423 12:30:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:31:04.423 12:30:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
00:31:04.680 NVMe0n1
00:31:04.680 12:30:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:31:04.680 12:30:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=96647
00:31:04.680 12:30:00 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1
00:31:04.939 Running I/O for 10 seconds...
00:31:05.876 12:30:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:31:05.876 12383.00 IOPS, 48.37 MiB/s [2024-11-25T12:30:01.752Z]
00:31:05.876 [2024-11-25 12:30:01.714717 .. 12:30:01.715800] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: [... once the listener is removed, every outstanding command on sqid:1 completes with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0: in-flight WRITEs (lba:111216..111408, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000), in-flight READs (lba:111096..111192, len:8, SGL TRANSPORT DATA BLOCK), then the remaining in-flight WRITEs (lba:111416..111792); nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs logs *ERROR*: aborting queued i/o, and 558:nvme_qpair_manual_complete_request manually completes the queued WRITEs (lba:111800..111840, PRP1 0x0 PRP2 0x0) with the same status ...]
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.877 [2024-11-25 12:30:01.715804] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.877 [2024-11-25 12:30:01.715808] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.877 [2024-11-25 12:30:01.715812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111848 len:8 PRP1 0x0 PRP2 0x0 00:31:05.877 [2024-11-25 12:30:01.715823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.877 [2024-11-25 12:30:01.715827] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.877 [2024-11-25 12:30:01.715831] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.877 [2024-11-25 12:30:01.715835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111856 len:8 PRP1 0x0 PRP2 0x0 00:31:05.877 [2024-11-25 12:30:01.715839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.877 [2024-11-25 12:30:01.715844] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.877 [2024-11-25 12:30:01.715847] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.877 [2024-11-25 12:30:01.715851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111864 len:8 PRP1 0x0 PRP2 0x0 00:31:05.877 [2024-11-25 12:30:01.715855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.877 [2024-11-25 12:30:01.715860] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.877 [2024-11-25 12:30:01.715863] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.877 [2024-11-25 12:30:01.715867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111872 len:8 PRP1 0x0 PRP2 0x0 00:31:05.877 [2024-11-25 12:30:01.715872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.877 [2024-11-25 12:30:01.715876] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.877 [2024-11-25 12:30:01.715880] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.877 [2024-11-25 12:30:01.715884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111880 len:8 PRP1 0x0 PRP2 0x0 00:31:05.877 [2024-11-25 12:30:01.715888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.877 [2024-11-25 12:30:01.715893] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.877 [2024-11-25 12:30:01.715896] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.877 [2024-11-25 12:30:01.715900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111888 len:8 PRP1 0x0 PRP2 0x0 00:31:05.877 [2024-11-25 12:30:01.715904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:05.877 [2024-11-25 12:30:01.715909] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.877 [2024-11-25 12:30:01.715912] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.877 [2024-11-25 12:30:01.715916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111896 len:8 PRP1 0x0 PRP2 0x0 00:31:05.877 [2024-11-25 12:30:01.715920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.877 [2024-11-25 12:30:01.715925] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.877 [2024-11-25 12:30:01.715928] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.877 [2024-11-25 12:30:01.715934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111904 len:8 PRP1 0x0 PRP2 0x0 00:31:05.877 [2024-11-25 12:30:01.715939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.877 [2024-11-25 12:30:01.715943] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.877 [2024-11-25 12:30:01.715947] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.877 [2024-11-25 12:30:01.715950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111912 len:8 PRP1 0x0 PRP2 0x0 00:31:05.877 [2024-11-25 12:30:01.715956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.877 [2024-11-25 12:30:01.715961] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.877 [2024-11-25 12:30:01.715971] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.877 [2024-11-25 12:30:01.715975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111920 len:8 PRP1 0x0 PRP2 0x0 00:31:05.877 [2024-11-25 12:30:01.715979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.877 [2024-11-25 12:30:01.715983] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.877 [2024-11-25 12:30:01.715987] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.877 [2024-11-25 12:30:01.715991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111928 len:8 PRP1 0x0 PRP2 0x0 00:31:05.877 [2024-11-25 12:30:01.715995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.877 [2024-11-25 12:30:01.716000] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.877 [2024-11-25 12:30:01.716003] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.877 [2024-11-25 12:30:01.716007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111936 len:8 PRP1 0x0 PRP2 0x0 00:31:05.877 [2024-11-25 12:30:01.716011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.877 [2024-11-25 12:30:01.716016] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.877 [2024-11-25 12:30:01.716019] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.877 [2024-11-25 12:30:01.716023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111944 len:8 PRP1 0x0 PRP2 0x0 00:31:05.877 [2024-11-25 12:30:01.716027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.877 [2024-11-25 12:30:01.716032] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.877 [2024-11-25 12:30:01.716035] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.877 [2024-11-25 12:30:01.716039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111952 len:8 PRP1 0x0 PRP2 0x0 00:31:05.877 [2024-11-25 12:30:01.716043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.877 [2024-11-25 12:30:01.716048] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.877 [2024-11-25 12:30:01.716051] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.877 [2024-11-25 12:30:01.716055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111960 len:8 PRP1 0x0 PRP2 0x0 00:31:05.877 [2024-11-25 12:30:01.716059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.877 [2024-11-25 12:30:01.716063] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.877 [2024-11-25 12:30:01.716067] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.877 [2024-11-25 12:30:01.716072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111968 len:8 PRP1 0x0 PRP2 0x0 00:31:05.877 [2024-11-25 12:30:01.716077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.877 [2024-11-25 12:30:01.716081] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.877 [2024-11-25 12:30:01.716085] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.877 [2024-11-25 12:30:01.716089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111976 len:8 PRP1 0x0 PRP2 0x0 00:31:05.877 [2024-11-25 12:30:01.716095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.877 [2024-11-25 12:30:01.716100] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.877 [2024-11-25 12:30:01.716103] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.877 [2024-11-25 12:30:01.716107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111984 len:8 PRP1 0x0 PRP2 0x0 00:31:05.877 [2024-11-25 12:30:01.716111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.877 [2024-11-25 12:30:01.716116] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:31:05.877 [2024-11-25 12:30:01.716119] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.877 [2024-11-25 12:30:01.716123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111992 len:8 PRP1 0x0 PRP2 0x0 00:31:05.877 [2024-11-25 12:30:01.716127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.877 [2024-11-25 12:30:01.716132] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.877 [2024-11-25 12:30:01.716135] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.877 [2024-11-25 12:30:01.716139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112000 len:8 PRP1 0x0 PRP2 0x0 00:31:05.877 [2024-11-25 12:30:01.716143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.877 [2024-11-25 12:30:01.716148] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.878 [2024-11-25 12:30:01.716151] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.878 [2024-11-25 12:30:01.716155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112008 len:8 PRP1 0x0 PRP2 0x0 00:31:05.878 [2024-11-25 12:30:01.716159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.878 [2024-11-25 12:30:01.716164] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.878 [2024-11-25 12:30:01.716167] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.878 [2024-11-25 12:30:01.716171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112016 len:8 PRP1 0x0 PRP2 0x0 00:31:05.878 [2024-11-25 12:30:01.716175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.878 [2024-11-25 12:30:01.716179] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.878 [2024-11-25 12:30:01.716183] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.878 [2024-11-25 12:30:01.716186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112024 len:8 PRP1 0x0 PRP2 0x0 00:31:05.878 [2024-11-25 12:30:01.716191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.878 [2024-11-25 12:30:01.716197] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.878 [2024-11-25 12:30:01.716200] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.878 [2024-11-25 12:30:01.716205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112032 len:8 PRP1 0x0 PRP2 0x0 00:31:05.878 [2024-11-25 12:30:01.716209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.878 [2024-11-25 12:30:01.716214] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.878 
[2024-11-25 12:30:01.724895] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.878 [2024-11-25 12:30:01.724926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112040 len:8 PRP1 0x0 PRP2 0x0 00:31:05.878 [2024-11-25 12:30:01.724936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.878 [2024-11-25 12:30:01.724949] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.878 [2024-11-25 12:30:01.724953] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.878 [2024-11-25 12:30:01.724958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112048 len:8 PRP1 0x0 PRP2 0x0 00:31:05.878 [2024-11-25 12:30:01.724963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.878 [2024-11-25 12:30:01.724968] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.878 [2024-11-25 12:30:01.724971] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.878 [2024-11-25 12:30:01.724975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112056 len:8 PRP1 0x0 PRP2 0x0 00:31:05.878 [2024-11-25 12:30:01.724979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.878 [2024-11-25 12:30:01.724984] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.878 [2024-11-25 12:30:01.724987] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.878 [2024-11-25 12:30:01.724991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112064 len:8 PRP1 0x0 PRP2 0x0 00:31:05.878 [2024-11-25 12:30:01.724996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.878 [2024-11-25 12:30:01.725000] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.878 [2024-11-25 12:30:01.725003] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.878 [2024-11-25 12:30:01.725007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112072 len:8 PRP1 0x0 PRP2 0x0 00:31:05.878 [2024-11-25 12:30:01.725011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.878 [2024-11-25 12:30:01.725016] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.878 [2024-11-25 12:30:01.725019] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.878 [2024-11-25 12:30:01.725023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112080 len:8 PRP1 0x0 PRP2 0x0 00:31:05.878 [2024-11-25 12:30:01.725027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.878 [2024-11-25 12:30:01.725031] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.878 [2024-11-25 12:30:01.725035] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.878 [2024-11-25 12:30:01.725038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112088 len:8 PRP1 0x0 PRP2 0x0 00:31:05.878 [2024-11-25 12:30:01.725043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.878 [2024-11-25 12:30:01.725047] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.878 [2024-11-25 12:30:01.725050] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.878 [2024-11-25 12:30:01.725055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112096 len:8 PRP1 0x0 PRP2 0x0 00:31:05.878 [2024-11-25 12:30:01.725059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.878 [2024-11-25 12:30:01.725063] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.878 [2024-11-25 12:30:01.725066] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.878 [2024-11-25 12:30:01.725070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112104 len:8 PRP1 0x0 PRP2 0x0 00:31:05.878 [2024-11-25 12:30:01.725074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.878 [2024-11-25 12:30:01.725080] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.878 [2024-11-25 12:30:01.725083] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.878 [2024-11-25 12:30:01.725087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112112 len:8 PRP1 0x0 PRP2 0x0 00:31:05.878 [2024-11-25 12:30:01.725091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.878 [2024-11-25 12:30:01.725096] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.878 [2024-11-25 12:30:01.725100] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.878 [2024-11-25 12:30:01.725104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111200 len:8 PRP1 0x0 PRP2 0x0 00:31:05.878 [2024-11-25 12:30:01.725108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.878 [2024-11-25 12:30:01.725112] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:05.878 [2024-11-25 12:30:01.725115] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:05.878 [2024-11-25 12:30:01.725119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111208 len:8 PRP1 0x0 PRP2 0x0 00:31:05.878 [2024-11-25 12:30:01.725123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:05.878 [2024-11-25 12:30:01.725301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:05.878 [2024-11-25 12:30:01.725319] 
00:31:05.878 [2024-11-25 12:30:01.725319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.878 [2024-11-25 12:30:01.725333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:31:05.878 [2024-11-25 12:30:01.725343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.878 [2024-11-25 12:30:01.725354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:31:05.878 [2024-11-25 12:30:01.725365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.878 [2024-11-25 12:30:01.725376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:31:05.878 [2024-11-25 12:30:01.725385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:05.878 [2024-11-25 12:30:01.725396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23caf50 is same with the state(6) to be set
00:31:05.878 [2024-11-25 12:30:01.725800] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:05.878 [2024-11-25 12:30:01.725833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23caf50 (9): Bad file descriptor
00:31:05.878 [2024-11-25 12:30:01.725942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:05.878 [2024-11-25 12:30:01.725963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23caf50 with addr=10.0.0.3, port=4420
[2024-11-25 12:30:01.725975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23caf50 is same with the state(6) to be set
[2024-11-25 12:30:01.725995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23caf50 (9): Bad file descriptor
[2024-11-25 12:30:01.726014] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
[2024-11-25 12:30:01.726023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
[2024-11-25 12:30:01.726033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
[2024-11-25 12:30:01.726045] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
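The sequence above — connect() failing with errno 111 (ECONNREFUSED) while the target listener is down, the controller dropping into a failed state, and a fresh reset being scheduled about once per second — is the host-side retry behaviour this timeout test exercises. As an illustration only (not SPDK's actual reset path), a minimal Python sketch of that reconnect loop, assuming the address and interval shown in the log:

    import errno
    import socket
    import time

    TARGET = ("10.0.0.3", 4420)   # address/port taken from the log above
    RETRY_INTERVAL_S = 1.0        # host/timeout.sh sleeps 1 s between attempts

    def connect_once():
        # One reconnect attempt; ECONNREFUSED (errno 111) means nothing is
        # listening yet, the same failure posix_sock_create reports above.
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            s.connect(TARGET)
            return s
        except OSError as e:
            s.close()
            if e.errno == errno.ECONNREFUSED:
                return None
            raise

    conn = None
    for _ in range(30):           # bounded here; the driver retries until success
        conn = connect_once()
        if conn is not None:
            break
        time.sleep(RETRY_INTERVAL_S)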
00:31:05.878 [2024-11-25 12:30:01.726057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:05.878 12:30:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:31:07.068 6943.50 IOPS, 27.12 MiB/s [2024-11-25T12:30:02.944Z]
[2024-11-25 12:30:02.726170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:31:07.068 [2024-11-25 12:30:02.726207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23caf50 with addr=10.0.0.3, port=4420
[2024-11-25 12:30:02.726213] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23caf50 is same with the state(6) to be set
[2024-11-25 12:30:02.726224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23caf50 (9): Bad file descriptor
[2024-11-25 12:30:02.726234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
[2024-11-25 12:30:02.726238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
[2024-11-25 12:30:02.726244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
[2024-11-25 12:30:02.726250] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
[2024-11-25 12:30:02.726255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:31:07.068 12:30:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
[2024-11-25 12:30:02.926617] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:31:07.326 12:30:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 96647
00:31:07.892 4629.00 IOPS, 18.08 MiB/s [2024-11-25T12:30:03.768Z]
[2024-11-25 12:30:03.742047] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
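Recovery happens when the test script re-adds the TCP listener through SPDK's JSON-RPC wrapper (host/timeout.sh@91 above); once tcp.c reports the target listening again, the next reset attempt succeeds. A small sketch of issuing that same RPC from Python — just a subprocess wrapper around the exact command shown in the log; the helper name is my own:

    import subprocess

    RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"  # path from the log

    def add_tcp_listener(nqn: str, addr: str, port: int) -> None:
        # Same call the test script makes to restore the listener.
        subprocess.run(
            [RPC, "nvmf_subsystem_add_listener", nqn,
             "-t", "tcp", "-a", addr, "-s", str(port)],
            check=True,
        )

    add_tcp_listener("nqn.2016-06.io.spdk:cnode1", "10.0.0.3", 4420)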
00:31:09.760 3471.75 IOPS, 13.56 MiB/s [2024-11-25T12:30:07.007Z] 4984.00 IOPS, 19.47 MiB/s [2024-11-25T12:30:07.941Z] 6256.83 IOPS, 24.44 MiB/s [2024-11-25T12:30:08.874Z] 7139.29 IOPS, 27.89 MiB/s [2024-11-25T12:30:09.805Z] 7833.38 IOPS, 30.60 MiB/s [2024-11-25T12:30:10.738Z] 8370.33 IOPS, 32.70 MiB/s [2024-11-25T12:30:10.738Z] 8795.70 IOPS, 34.36 MiB/s
00:31:14.862 Latency(us)
00:31:14.862 [2024-11-25T12:30:10.738Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:14.862 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:31:14.862 Verification LBA range: start 0x0 length 0x4000
00:31:14.862 NVMe0n1 : 10.01 8800.05 34.38 0.00 0.00 14518.38 1436.75 3019898.88
00:31:14.862 [2024-11-25T12:30:10.738Z] ===================================================================================================================
00:31:14.862 [2024-11-25T12:30:10.738Z] Total : 8800.05 34.38 0.00 0.00 14518.38 1436.75 3019898.88
00:31:14.862 {
00:31:14.862   "results": [
00:31:14.862     {
00:31:14.862       "job": "NVMe0n1",
00:31:14.862       "core_mask": "0x4",
00:31:14.862       "workload": "verify",
00:31:14.862       "status": "finished",
00:31:14.862       "verify_range": {
00:31:14.862         "start": 0,
00:31:14.862         "length": 16384
00:31:14.862       },
00:31:14.862       "queue_depth": 128,
00:31:14.862       "io_size": 4096,
00:31:14.862       "runtime": 10.009606,
00:31:14.862       "iops": 8800.046675163838,
00:31:14.862       "mibps": 34.37518232485874,
00:31:14.862       "io_failed": 0,
00:31:14.862       "io_timeout": 0,
00:31:14.862       "avg_latency_us": 14518.377583051335,
00:31:14.862       "min_latency_us": 1436.7507692307693,
00:31:14.862       "max_latency_us": 3019898.88
00:31:14.862     }
00:31:14.862   ],
00:31:14.862   "core_count": 1
00:31:14.862 }
00:31:14.862 12:30:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=96769
00:31:14.862 12:30:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:31:14.862 12:30:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:31:14.862 Running I/O for 10 seconds...
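The summary row above follows directly from the JSON fields bdevperf reports: MiB/s is iops * io_size / 2^20, and the completed-I/O count can be recovered as iops * runtime. A quick arithmetic check in Python, with the values copied verbatim from the JSON block:

    # Cross-check the bdevperf summary from its JSON result fields.
    iops = 8800.046675163838
    io_size = 4096            # bytes per I/O
    runtime = 10.009606       # seconds

    mibps = iops * io_size / (1 << 20)
    total_ios = iops * runtime

    print(f"{mibps:.2f} MiB/s")                            # 34.38, matching the table
    print(f"~{round(total_ios)} I/Os in {runtime:.2f} s")  # ~88085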
00:31:15.794 12:30:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:31:16.094 12763.00 IOPS, 49.86 MiB/s [2024-11-25T12:30:11.970Z]
[2024-11-25 12:30:11.822132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:114280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:16.094 [2024-11-25 12:30:11.822170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:16.094 [... identical command/completion pairs repeated for WRITE lba:114288 through lba:114584 (SGL DATA BLOCK) and READ lba:113768 through lba:114088 (SGL TRANSPORT DATA BLOCK), each ABORTED - SQ DELETION (00/08) ...]
00:31:16.096 [2024-11-25 12:30:11.822999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:114096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.096 [2024-11-25 12:30:11.823004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.096 [2024-11-25 12:30:11.823010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:114104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.096 [2024-11-25 12:30:11.823014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.096 [2024-11-25 12:30:11.823020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:114112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.096 [2024-11-25 12:30:11.823024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.096 [2024-11-25 12:30:11.823030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:114120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.096 [2024-11-25 12:30:11.823034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.096 [2024-11-25 12:30:11.823040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:114128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.096 [2024-11-25 12:30:11.823044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.096 [2024-11-25 12:30:11.823050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:114136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.096 [2024-11-25 12:30:11.823054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.096 [2024-11-25 12:30:11.823060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:114144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.096 [2024-11-25 12:30:11.823064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.096 [2024-11-25 12:30:11.823070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:114152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.096 [2024-11-25 12:30:11.823075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.096 [2024-11-25 12:30:11.823080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:114160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.096 [2024-11-25 12:30:11.823085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.096 [2024-11-25 12:30:11.823090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:114168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.096 [2024-11-25 12:30:11.823095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.096 [2024-11-25 
12:30:11.823101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:114176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.096 [2024-11-25 12:30:11.823105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.096 [2024-11-25 12:30:11.823111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:114184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.096 [2024-11-25 12:30:11.823115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.096 [2024-11-25 12:30:11.823121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:114192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.096 [2024-11-25 12:30:11.823125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.096 [2024-11-25 12:30:11.823131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:114200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.096 [2024-11-25 12:30:11.823136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.096 [2024-11-25 12:30:11.823142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:114208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.096 [2024-11-25 12:30:11.823147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.096 [2024-11-25 12:30:11.823153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:114592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.096 [2024-11-25 12:30:11.823157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.096 [2024-11-25 12:30:11.823162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:114600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.096 [2024-11-25 12:30:11.823167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.096 [2024-11-25 12:30:11.823173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:114608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.096 [2024-11-25 12:30:11.823177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.096 [2024-11-25 12:30:11.823183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:114616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.096 [2024-11-25 12:30:11.823187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.096 [2024-11-25 12:30:11.823192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:114624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.096 [2024-11-25 12:30:11.823196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.097 [2024-11-25 12:30:11.823202] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:114632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.097 [2024-11-25 12:30:11.823207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.097 [2024-11-25 12:30:11.823212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:114640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.097 [2024-11-25 12:30:11.823216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.097 [2024-11-25 12:30:11.823222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.097 [2024-11-25 12:30:11.823227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.097 [2024-11-25 12:30:11.823233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:114656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.097 [2024-11-25 12:30:11.823237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.097 [2024-11-25 12:30:11.823243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:114664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.097 [2024-11-25 12:30:11.823247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.097 [2024-11-25 12:30:11.823253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:114672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.097 [2024-11-25 12:30:11.823257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.097 [2024-11-25 12:30:11.823263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:114680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.097 [2024-11-25 12:30:11.823267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.097 [2024-11-25 12:30:11.823273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:114688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.097 [2024-11-25 12:30:11.823277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.097 [2024-11-25 12:30:11.823282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:114696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.097 [2024-11-25 12:30:11.823287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.097 [2024-11-25 12:30:11.823293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:114704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.097 [2024-11-25 12:30:11.823303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.097 [2024-11-25 12:30:11.823308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:60 nsid:1 lba:114712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.097 [2024-11-25 12:30:11.823313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.097 [2024-11-25 12:30:11.823318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:114720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.097 [2024-11-25 12:30:11.823323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.097 [2024-11-25 12:30:11.823328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:114728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.097 [2024-11-25 12:30:11.823336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.097 [2024-11-25 12:30:11.823342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:114736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.097 [2024-11-25 12:30:11.823346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.097 [2024-11-25 12:30:11.823352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:114744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.097 [2024-11-25 12:30:11.823356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.097 [2024-11-25 12:30:11.823362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:114752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.097 [2024-11-25 12:30:11.823367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.097 [2024-11-25 12:30:11.823372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:114760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.097 [2024-11-25 12:30:11.823377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.097 [2024-11-25 12:30:11.823382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:114768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.097 [2024-11-25 12:30:11.823387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.097 [2024-11-25 12:30:11.823392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:114776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.097 [2024-11-25 12:30:11.823397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.097 [2024-11-25 12:30:11.823402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:114784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:16.097 [2024-11-25 12:30:11.823407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.097 [2024-11-25 12:30:11.823412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:114216 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.097 [2024-11-25 12:30:11.823417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.097 [2024-11-25 12:30:11.823422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:114224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.097 [2024-11-25 12:30:11.823427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.097 [2024-11-25 12:30:11.823433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:114232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.097 [2024-11-25 12:30:11.823437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.097 [2024-11-25 12:30:11.823443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:114240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.097 [2024-11-25 12:30:11.823447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.097 [2024-11-25 12:30:11.823453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:114248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.097 [2024-11-25 12:30:11.823457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.097 [2024-11-25 12:30:11.823463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:114256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.097 [2024-11-25 12:30:11.823468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.097 [2024-11-25 12:30:11.823475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:114264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:16.097 [2024-11-25 12:30:11.823479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.097 [2024-11-25 12:30:11.823498] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:16.097 [2024-11-25 12:30:11.823502] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:16.097 [2024-11-25 12:30:11.823506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114272 len:8 PRP1 0x0 PRP2 0x0 00:31:16.097 [2024-11-25 12:30:11.823512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:16.097 [2024-11-25 12:30:11.823714] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:31:16.097 [2024-11-25 12:30:11.823771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23caf50 (9): Bad file descriptor 00:31:16.097 [2024-11-25 12:30:11.823828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:16.097 [2024-11-25 12:30:11.823836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23caf50 with addr=10.0.0.3, port=4420 00:31:16.097 [2024-11-25 12:30:11.823841] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23caf50 is same with the state(6) to be set 00:31:16.097 [2024-11-25 12:30:11.823850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23caf50 (9): Bad file descriptor 00:31:16.097 [2024-11-25 12:30:11.823858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:31:16.097 [2024-11-25 12:30:11.823862] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:31:16.097 [2024-11-25 12:30:11.823868] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:31:16.097 [2024-11-25 12:30:11.823873] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:31:16.097 [2024-11-25 12:30:11.823878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:31:16.097 12:30:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:31:17.064 7110.50 IOPS, 27.78 MiB/s [2024-11-25T12:30:12.940Z] [2024-11-25 12:30:12.823952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.064 [2024-11-25 12:30:12.823987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23caf50 with addr=10.0.0.3, port=4420 00:31:17.064 [2024-11-25 12:30:12.823995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23caf50 is same with the state(6) to be set 00:31:17.064 [2024-11-25 12:30:12.824006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23caf50 (9): Bad file descriptor 00:31:17.064 [2024-11-25 12:30:12.824014] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:31:17.064 [2024-11-25 12:30:12.824018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:31:17.064 [2024-11-25 12:30:12.824024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:31:17.064 [2024-11-25 12:30:12.824030] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:31:17.064 [2024-11-25 12:30:12.824035] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:31:17.996 4740.33 IOPS, 18.52 MiB/s [2024-11-25T12:30:13.873Z] [2024-11-25 12:30:13.824106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.997 [2024-11-25 12:30:13.824140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23caf50 with addr=10.0.0.3, port=4420 00:31:17.997 [2024-11-25 12:30:13.824147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23caf50 is same with the state(6) to be set 00:31:17.997 [2024-11-25 12:30:13.824158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23caf50 (9): Bad file descriptor 00:31:17.997 [2024-11-25 12:30:13.824167] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:31:17.997 [2024-11-25 12:30:13.824172] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:31:17.997 [2024-11-25 12:30:13.824178] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:31:17.997 [2024-11-25 12:30:13.824183] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:31:17.997 [2024-11-25 12:30:13.824188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:31:19.187 3555.25 IOPS, 13.89 MiB/s [2024-11-25T12:30:15.063Z] [2024-11-25 12:30:14.827268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.187 [2024-11-25 12:30:14.827302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23caf50 with addr=10.0.0.3, port=4420 00:31:19.187 [2024-11-25 12:30:14.827309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23caf50 is same with the state(6) to be set 00:31:19.187 [2024-11-25 12:30:14.827470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23caf50 (9): Bad file descriptor 00:31:19.187 [2024-11-25 12:30:14.827630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:31:19.187 [2024-11-25 12:30:14.827635] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:31:19.187 [2024-11-25 12:30:14.827641] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:31:19.187 [2024-11-25 12:30:14.827646] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
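Two notes on decoding the stretch above. ABORTED - SQ DELETION (00/08) is NVMe generic status 0x08, Command Aborted due to SQ Deletion: when the reset path deletes the qpair's submission queue, every queued command is completed manually with that status instead of being silently dropped. And the bdevperf status lines (7110.50 IOPS, 4740.33 IOPS, 3555.25 IOPS, ...) are cumulative averages, total completions over elapsed seconds, so while the target is unreachable they decay as n/t. A minimal shell check, assuming a completion count of ~14221 before the outage (inferred from 7110.50 * 2; the log never states it, and 4096 is the per-I/O size visible as "io_size": 4096 in the results JSON further down):

  # hypothetical back-of-the-envelope check; n = inferred completions, t = elapsed seconds
  for t in 2 3 4 5; do
      awk -v n=14221 -v t="$t" \
          'BEGIN { printf "%.2f IOPS, %.2f MiB/s\n", n / t, n / t * 4096 / 1048576 }'
  done
  # prints 7110.50/27.78, 4740.33/18.52, 3555.25/13.89, 2844.20/11.11,
  # matching the status lines printed during and just after the outage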
00:31:19.187 [2024-11-25 12:30:14.827651] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:31:19.187 12:30:14 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:31:19.187 [2024-11-25 12:30:15.035001] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:31:19.187 12:30:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 96769 00:31:20.119 2844.20 IOPS, 11.11 MiB/s [2024-11-25T12:30:15.995Z] [2024-11-25 12:30:15.851015] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 00:31:21.983 4195.33 IOPS, 16.39 MiB/s [2024-11-25T12:30:18.793Z] 5457.57 IOPS, 21.32 MiB/s [2024-11-25T12:30:19.724Z] 6422.75 IOPS, 25.09 MiB/s [2024-11-25T12:30:21.095Z] 7157.44 IOPS, 27.96 MiB/s [2024-11-25T12:30:21.095Z] 7734.30 IOPS, 30.21 MiB/s 00:31:25.219 Latency(us) 00:31:25.219 [2024-11-25T12:30:21.095Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:25.219 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:31:25.219 Verification LBA range: start 0x0 length 0x4000 00:31:25.219 NVMe0n1 : 10.01 7742.96 30.25 5440.89 0.00 9684.52 598.65 3006993.33 00:31:25.219 [2024-11-25T12:30:21.095Z] =================================================================================================================== 00:31:25.219 [2024-11-25T12:30:21.095Z] Total : 7742.96 30.25 5440.89 0.00 9684.52 0.00 3006993.33 00:31:25.219 { 00:31:25.219 "results": [ 00:31:25.219 { 00:31:25.219 "job": "NVMe0n1", 00:31:25.219 "core_mask": "0x4", 00:31:25.219 "workload": "verify", 00:31:25.219 "status": "finished", 00:31:25.219 "verify_range": { 00:31:25.219 "start": 0, 00:31:25.219 "length": 16384 00:31:25.219 }, 00:31:25.219 "queue_depth": 128, 00:31:25.219 "io_size": 4096, 00:31:25.219 "runtime": 10.005345, 00:31:25.219 "iops": 7742.9613871385745, 00:31:25.219 "mibps": 30.245942918510057, 00:31:25.219 "io_failed": 54438, 00:31:25.219 "io_timeout": 0, 00:31:25.219 "avg_latency_us": 9684.517168001017, 00:31:25.219 "min_latency_us": 598.6461538461539, 00:31:25.219 "max_latency_us": 3006993.329230769 00:31:25.219 } 00:31:25.219 ], 00:31:25.219 "core_count": 1 00:31:25.219 } 00:31:25.219 12:30:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 96599 00:31:25.219 12:30:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 96599 ']' 00:31:25.219 12:30:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 96599 00:31:25.219 12:30:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:31:25.219 12:30:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:25.219 12:30:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96599 00:31:25.219 killing process with pid 96599 00:31:25.219 Received shutdown signal, test time was about 10.000000 seconds 00:31:25.219 00:31:25.219 Latency(us) 00:31:25.219 [2024-11-25T12:30:21.095Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:25.219 [2024-11-25T12:30:21.095Z] =================================================================================================================== 00:31:25.219 [2024-11-25T12:30:21.095Z] Total 
: 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:25.219 12:30:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:31:25.219 12:30:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:31:25.219 12:30:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96599' 00:31:25.219 12:30:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 96599 00:31:25.219 12:30:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 96599 00:31:25.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:25.219 12:30:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=96896 00:31:25.219 12:30:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 96896 /var/tmp/bdevperf.sock 00:31:25.219 12:30:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:31:25.219 12:30:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 96896 ']' 00:31:25.219 12:30:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:25.219 12:30:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:25.219 12:30:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:25.219 12:30:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:25.219 12:30:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:31:25.219 [2024-11-25 12:30:20.897545] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:31:25.219 [2024-11-25 12:30:20.897604] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96896 ] 00:31:25.219 [2024-11-25 12:30:21.033682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:25.219 [2024-11-25 12:30:21.064113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:26.151 12:30:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:26.151 12:30:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:31:26.151 12:30:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=96924 00:31:26.151 12:30:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:31:26.151 12:30:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96896 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:31:26.151 12:30:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:31:26.408 NVMe0n1 00:31:26.408 12:30:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=96973 00:31:26.408 12:30:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:26.408 12:30:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:31:26.665 Running I/O for 10 seconds... 
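For reference, the attach-and-run sequence that timeout.sh steps through above can be condensed into the following standalone sketch (bash; binary paths, RPC flags, the address and the NQN are copied verbatim from the trace, BDEVPERF_PID stands in for the concrete pid 96896, and waitforlisten/error handling are elided):

  SPDK=/home/vagrant/spdk_repo/spdk
  SOCK=/var/tmp/bdevperf.sock

  # start bdevperf paused (-z) on core 2 (-m 0x4); once kicked off it runs
  # 4 KiB random reads at queue depth 128 for 10 seconds
  "$SPDK/build/examples/bdevperf" -m 0x4 -z -r "$SOCK" -q 128 -o 4096 -w randread -t 10 -f &
  BDEVPERF_PID=$!

  # nvme bdev options used by the test, flags copied verbatim from the trace
  "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_set_options -r -1 -e 9

  # attach the nvmf_timeout.bt bpftrace probe to the bdevperf process
  "$SPDK/scripts/bpftrace.sh" "$BDEVPERF_PID" "$SPDK/scripts/bpf/nvmf_timeout.bt" &

  # attach the target: declare the controller lost after 5 s, reconnect every 2 s
  "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

  # start the timed run ("Running I/O for 10 seconds..." above)
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests

The listener-removal RPC that follows (nvmf_subsystem_remove_listener, next line) is what yanks the target away mid-run and exercises those reconnect parameters.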
00:31:27.603 12:30:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:31:27.603 24814.00 IOPS, 96.93 MiB/s [2024-11-25T12:30:23.479Z] [2024-11-25 12:30:23.456191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd15800 is same with the state(6) to be set
[... the same recv-state message repeats roughly two dozen more times while the listener is torn down ...]
[2024-11-25 12:30:23.456795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:30640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.603 [2024-11-25 12:30:23.456825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... ~60 further command/completion pairs elided: the in-flight random-LBA READs of the second bdevperf run are likewise completed with ABORTED - SQ DELETION (00/08); the dump is cut off mid-pair at the end of this log ...]
dnr:0 00:31:27.605 [2024-11-25 12:30:23.457544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:122336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.605 [2024-11-25 12:30:23.457549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.605 [2024-11-25 12:30:23.457555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:57072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.605 [2024-11-25 12:30:23.457559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.605 [2024-11-25 12:30:23.457565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:62856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.605 [2024-11-25 12:30:23.457569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.605 [2024-11-25 12:30:23.457575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:87288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.605 [2024-11-25 12:30:23.457580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.605 [2024-11-25 12:30:23.457586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:104496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.605 [2024-11-25 12:30:23.457590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.605 [2024-11-25 12:30:23.457596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:128928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.605 [2024-11-25 12:30:23.457600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.605 [2024-11-25 12:30:23.457606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:29112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.605 [2024-11-25 12:30:23.457611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.605 [2024-11-25 12:30:23.457617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.605 [2024-11-25 12:30:23.457621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.605 [2024-11-25 12:30:23.457627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:58464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.605 [2024-11-25 12:30:23.457631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.605 [2024-11-25 12:30:23.457637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:34904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.605 [2024-11-25 12:30:23.457641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.605 [2024-11-25 
12:30:23.457647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:45824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.605 [2024-11-25 12:30:23.457651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.605 [2024-11-25 12:30:23.457657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:43728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.605 [2024-11-25 12:30:23.457662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.605 [2024-11-25 12:30:23.457668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:17344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.605 [2024-11-25 12:30:23.457673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.605 [2024-11-25 12:30:23.457679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:34752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.605 [2024-11-25 12:30:23.457684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.605 [2024-11-25 12:30:23.457690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:113264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.605 [2024-11-25 12:30:23.457694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.605 [2024-11-25 12:30:23.457700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:98376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.605 [2024-11-25 12:30:23.457704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.605 [2024-11-25 12:30:23.457710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:11040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.605 [2024-11-25 12:30:23.457714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.605 [2024-11-25 12:30:23.457720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:33800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.605 [2024-11-25 12:30:23.457724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.605 [2024-11-25 12:30:23.457730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:42136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.605 [2024-11-25 12:30:23.457734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.605 [2024-11-25 12:30:23.457740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:75664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.605 [2024-11-25 12:30:23.457745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.605 [2024-11-25 12:30:23.457758] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:78552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.605 [2024-11-25 12:30:23.457763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.605 [2024-11-25 12:30:23.457768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:13032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.605 [2024-11-25 12:30:23.457773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.605 [2024-11-25 12:30:23.457778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:96664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.605 [2024-11-25 12:30:23.457783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.605 [2024-11-25 12:30:23.457789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:84000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.605 [2024-11-25 12:30:23.457793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.605 [2024-11-25 12:30:23.457798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:1720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.605 [2024-11-25 12:30:23.457803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.605 [2024-11-25 12:30:23.457810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:115104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.605 [2024-11-25 12:30:23.457814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.606 [2024-11-25 12:30:23.457820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:97528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.606 [2024-11-25 12:30:23.457824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.606 [2024-11-25 12:30:23.457830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:18664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.606 [2024-11-25 12:30:23.457835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.606 [2024-11-25 12:30:23.457841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:79392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.606 [2024-11-25 12:30:23.457845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.606 [2024-11-25 12:30:23.457851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:74960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.606 [2024-11-25 12:30:23.457856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.606 [2024-11-25 12:30:23.457862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:43 nsid:1 lba:24016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.606 [2024-11-25 12:30:23.457866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.606 [2024-11-25 12:30:23.457872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:98384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.606 [2024-11-25 12:30:23.457876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.606 [2024-11-25 12:30:23.457882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:74720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.606 [2024-11-25 12:30:23.457886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.606 [2024-11-25 12:30:23.457892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:26808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.606 [2024-11-25 12:30:23.457896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.606 [2024-11-25 12:30:23.457902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:116416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.606 [2024-11-25 12:30:23.457906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.606 [2024-11-25 12:30:23.457912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:124000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.606 [2024-11-25 12:30:23.457916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.606 [2024-11-25 12:30:23.457922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:38464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.606 [2024-11-25 12:30:23.457927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.606 [2024-11-25 12:30:23.457932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:106760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.606 [2024-11-25 12:30:23.457937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.606 [2024-11-25 12:30:23.457943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:60880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.606 [2024-11-25 12:30:23.457947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.606 [2024-11-25 12:30:23.457953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:39552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.606 [2024-11-25 12:30:23.457957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.606 [2024-11-25 12:30:23.457963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1824 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.606 [2024-11-25 12:30:23.457967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.606 [2024-11-25 12:30:23.457973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:63736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.606 [2024-11-25 12:30:23.457977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.606 [2024-11-25 12:30:23.457983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:107664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.606 [2024-11-25 12:30:23.457987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.606 [2024-11-25 12:30:23.457993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:42912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.606 [2024-11-25 12:30:23.457997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.606 [2024-11-25 12:30:23.458003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:5064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.606 [2024-11-25 12:30:23.458008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.606 [2024-11-25 12:30:23.458015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:41776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.606 [2024-11-25 12:30:23.458019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.606 [2024-11-25 12:30:23.458025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:100376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.606 [2024-11-25 12:30:23.458030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.606 [2024-11-25 12:30:23.458036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:73792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.606 [2024-11-25 12:30:23.458040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.606 [2024-11-25 12:30:23.458046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:77080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.606 [2024-11-25 12:30:23.458051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.606 [2024-11-25 12:30:23.458057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:10736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.606 [2024-11-25 12:30:23.458073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.606 [2024-11-25 12:30:23.458079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:12544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:27.606 [2024-11-25 12:30:23.458084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.606 [2024-11-25 12:30:23.458089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:63568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.606 [2024-11-25 12:30:23.458094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.606 [2024-11-25 12:30:23.458099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.606 [2024-11-25 12:30:23.458104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.606 [2024-11-25 12:30:23.458110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:7920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:27.606 [2024-11-25 12:30:23.458114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.606 [2024-11-25 12:30:23.458136] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:27.606 [2024-11-25 12:30:23.458145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:50016 len:8 PRP1 0x0 PRP2 0x0 00:31:27.606 [2024-11-25 12:30:23.458150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.606 [2024-11-25 12:30:23.458157] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:27.606 [2024-11-25 12:30:23.458161] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:27.606 [2024-11-25 12:30:23.458164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:95048 len:8 PRP1 0x0 PRP2 0x0 00:31:27.606 [2024-11-25 12:30:23.458169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.606 [2024-11-25 12:30:23.458173] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:27.606 [2024-11-25 12:30:23.458177] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:27.606 [2024-11-25 12:30:23.458180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:51832 len:8 PRP1 0x0 PRP2 0x0 00:31:27.606 [2024-11-25 12:30:23.458184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.606 [2024-11-25 12:30:23.458189] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:27.606 [2024-11-25 12:30:23.458193] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:27.606 [2024-11-25 12:30:23.458197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35664 len:8 PRP1 0x0 PRP2 0x0 00:31:27.606 [2024-11-25 12:30:23.458202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.606 [2024-11-25 12:30:23.458207] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
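The wall of paired *NOTICE* entries above is SPDK draining the I/O queue pair on disconnect: each outstanding READ on qid:1 is echoed by nvme_io_qpair_print_command and then completed with ABORTED - SQ DELETION (00/08), and requests still queued in software are finished through nvme_qpair_manual_complete_request. A quick way to size the abort storm from a saved copy of this console output (a sketch; build.log is a hypothetical filename for the saved log):

    # count the in-flight I/Os failed with SQ DELETION while qid:1 was torn down
    grep -c 'ABORTED - SQ DELETION (00/08) qid:1' build.log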
00:31:27.606 [2024-11-25 12:30:23.458210] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:27.606 [2024-11-25 12:30:23.458214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81608 len:8 PRP1 0x0 PRP2 0x0 00:31:27.606 [2024-11-25 12:30:23.458218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:27.606 [2024-11-25 12:30:23.458446] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:31:27.606 [2024-11-25 12:30:23.458496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21ccf50 (9): Bad file descriptor 00:31:27.606 [2024-11-25 12:30:23.458558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:27.606 [2024-11-25 12:30:23.458572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21ccf50 with addr=10.0.0.3, port=4420 00:31:27.606 [2024-11-25 12:30:23.458578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ccf50 is same with the state(6) to be set 00:31:27.606 [2024-11-25 12:30:23.458586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21ccf50 (9): Bad file descriptor 00:31:27.606 [2024-11-25 12:30:23.458595] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:31:27.607 [2024-11-25 12:30:23.458599] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:31:27.607 [2024-11-25 12:30:23.458604] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:31:27.607 [2024-11-25 12:30:23.458609] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:31:27.607 [2024-11-25 12:30:23.458614] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:31:27.864 12:30:23 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 96973 00:31:29.778 13791.00 IOPS, 53.87 MiB/s [2024-11-25T12:30:25.654Z] 9194.00 IOPS, 35.91 MiB/s [2024-11-25T12:30:25.654Z] [2024-11-25 12:30:25.458821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:29.778 [2024-11-25 12:30:25.458857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21ccf50 with addr=10.0.0.3, port=4420 00:31:29.778 [2024-11-25 12:30:25.458864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ccf50 is same with the state(6) to be set 00:31:29.778 [2024-11-25 12:30:25.458876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21ccf50 (9): Bad file descriptor 00:31:29.778 [2024-11-25 12:30:25.458885] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:31:29.778 [2024-11-25 12:30:25.458890] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:31:29.778 [2024-11-25 12:30:25.458895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
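errno = 111 in the posix_sock_create errors above is ECONNREFUSED on Linux: nothing is accepting connections on 10.0.0.3:4420 while the controller is down, so each reconnect attempt fails immediately and bdev_nvme records the reset as failed before scheduling the next one. The same condition can be probed by hand in plain bash (a sketch; address and port are taken from the log, and it assumes it runs in the same network namespace as the initiator):

    # prints "refused" while no listener is up on 10.0.0.3:4420
    timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.3/4420' 2>/dev/null || echo refused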
00:31:29.778 [2024-11-25 12:30:25.458901] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:31:29.778 [2024-11-25 12:30:25.458906] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:31:31.644 6895.50 IOPS, 26.94 MiB/s [2024-11-25T12:30:27.520Z] 5516.40 IOPS, 21.55 MiB/s [2024-11-25T12:30:27.520Z] [2024-11-25 12:30:27.459145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:31.644 [2024-11-25 12:30:27.459186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21ccf50 with addr=10.0.0.3, port=4420 00:31:31.644 [2024-11-25 12:30:27.459193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ccf50 is same with the state(6) to be set 00:31:31.644 [2024-11-25 12:30:27.459206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21ccf50 (9): Bad file descriptor 00:31:31.644 [2024-11-25 12:30:27.459215] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:31:31.644 [2024-11-25 12:30:27.459220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:31:31.644 [2024-11-25 12:30:27.459228] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:31:31.644 [2024-11-25 12:30:27.459233] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:31:31.644 [2024-11-25 12:30:27.459239] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:31:33.512 4597.00 IOPS, 17.96 MiB/s [2024-11-25T12:30:29.646Z] 3940.29 IOPS, 15.39 MiB/s [2024-11-25T12:30:29.646Z] [2024-11-25 12:30:29.459408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:31:33.770 [2024-11-25 12:30:29.459439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:31:33.770 [2024-11-25 12:30:29.459445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:31:33.770 [2024-11-25 12:30:29.459450] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:31:33.770 [2024-11-25 12:30:29.459456] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
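Read across the timestamps, the cascades above show the retry cadence: connect attempts at 12:30:23, :25 and :27 each fail with ECONNREFUSED, roughly 2 s apart, matching the "reconnect delay bdev controller NVMe0" probe hits (3263 ms, 5264 ms, 7264 ms after attach) printed from trace.txt further below; at 12:30:29 the controller is already in failed state and bdev_nvme stops retrying. The cadence can be recovered mechanically from a saved copy of the console output (a sketch; build.log is a hypothetical filename):

    # one hit per reset/reconnect being scheduled; successive timestamps step by ~2 s
    grep 'resetting controller' build.log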
00:31:34.704 3447.75 IOPS, 13.47 MiB/s 00:31:34.704 Latency(us) 00:31:34.704 [2024-11-25T12:30:30.580Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:34.704 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:31:34.704 NVMe0n1 : 8.11 3399.25 13.28 15.77 0.00 37433.96 1814.84 7020619.62 00:31:34.704 [2024-11-25T12:30:30.580Z] =================================================================================================================== 00:31:34.704 [2024-11-25T12:30:30.580Z] Total : 3399.25 13.28 15.77 0.00 37433.96 1814.84 7020619.62 00:31:34.704 { 00:31:34.704 "results": [ 00:31:34.704 { 00:31:34.704 "job": "NVMe0n1", 00:31:34.704 "core_mask": "0x4", 00:31:34.704 "workload": "randread", 00:31:34.704 "status": "finished", 00:31:34.704 "queue_depth": 128, 00:31:34.704 "io_size": 4096, 00:31:34.704 "runtime": 8.114136, 00:31:34.704 "iops": 3399.2528594541673, 00:31:34.704 "mibps": 13.27833148224284, 00:31:34.704 "io_failed": 128, 00:31:34.704 "io_timeout": 0, 00:31:34.704 "avg_latency_us": 37433.9563021403, 00:31:34.704 "min_latency_us": 1814.843076923077, 00:31:34.704 "max_latency_us": 7020619.618461538 00:31:34.704 } 00:31:34.704 ], 00:31:34.704 "core_count": 1 00:31:34.704 } 00:31:34.704 12:30:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:31:34.704 Attaching 5 probes... 00:31:34.704 1263.439618: reset bdev controller NVMe0 00:31:34.704 1263.522703: reconnect bdev controller NVMe0 00:31:34.704 3263.745686: reconnect delay bdev controller NVMe0 00:31:34.704 3263.765362: reconnect bdev controller NVMe0 00:31:34.704 5264.073755: reconnect delay bdev controller NVMe0 00:31:34.704 5264.086791: reconnect bdev controller NVMe0 00:31:34.704 7264.394933: reconnect delay bdev controller NVMe0 00:31:34.704 7264.408526: reconnect bdev controller NVMe0 00:31:34.704 12:30:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:31:34.704 12:30:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:31:34.704 12:30:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 96924 00:31:34.704 12:30:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:31:34.704 12:30:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 96896 00:31:34.704 12:30:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 96896 ']' 00:31:34.704 12:30:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 96896 00:31:34.704 12:30:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:31:34.704 12:30:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:34.704 12:30:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96896 00:31:34.704 killing process with pid 96896 00:31:34.704 Received shutdown signal, test time was about 8.170478 seconds 00:31:34.704 00:31:34.704 Latency(us) 00:31:34.704 [2024-11-25T12:30:30.580Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:34.704 [2024-11-25T12:30:30.580Z] =================================================================================================================== 00:31:34.704 [2024-11-25T12:30:30.580Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:34.704 12:30:30 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:31:34.704 12:30:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:31:34.704 12:30:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96896' 00:31:34.704 12:30:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 96896 00:31:34.704 12:30:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 96896 00:31:34.961 12:30:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:34.961 12:30:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:31:34.961 12:30:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:31:34.961 12:30:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:34.961 12:30:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:31:35.219 12:30:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:35.219 12:30:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:31:35.219 12:30:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:35.219 12:30:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:35.219 rmmod nvme_tcp 00:31:35.219 rmmod nvme_fabrics 00:31:35.219 rmmod nvme_keyring 00:31:35.219 12:30:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:35.219 12:30:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:31:35.219 12:30:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:31:35.219 12:30:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 96321 ']' 00:31:35.219 12:30:30 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 96321 00:31:35.219 12:30:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 96321 ']' 00:31:35.219 12:30:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 96321 00:31:35.219 12:30:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:31:35.219 12:30:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:35.219 12:30:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 96321 00:31:35.219 killing process with pid 96321 00:31:35.219 12:30:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:35.219 12:30:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:35.219 12:30:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 96321' 00:31:35.219 12:30:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 96321 00:31:35.219 12:30:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 96321 00:31:35.477 12:30:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:35.477 12:30:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:35.477 12:30:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:35.477 12:30:31 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:31:35.477 12:30:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:35.477 12:30:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:31:35.477 12:30:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:31:35.477 12:30:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:35.477 12:30:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:31:35.477 12:30:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:31:35.477 12:30:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:31:35.477 12:30:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:31:35.477 12:30:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:31:35.477 12:30:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:31:35.477 12:30:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:31:35.477 12:30:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:31:35.477 12:30:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:31:35.477 12:30:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:31:35.477 12:30:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:31:35.477 12:30:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:31:35.477 12:30:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:35.477 12:30:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:35.477 12:30:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:31:35.477 12:30:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:35.477 12:30:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:35.477 12:30:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:35.477 ************************************ 00:31:35.477 END TEST nvmf_timeout 00:31:35.477 ************************************ 00:31:35.477 12:30:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:31:35.477 00:31:35.477 real 0m44.933s 00:31:35.477 user 2m12.382s 00:31:35.477 sys 0m3.751s 00:31:35.477 12:30:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:35.477 12:30:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:31:35.736 12:30:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:31:35.736 12:30:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:31:35.736 00:31:35.736 real 5m29.757s 00:31:35.736 user 14m9.957s 00:31:35.736 sys 0m46.484s 00:31:35.736 12:30:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:35.736 12:30:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 
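Worth noting before the harness moves on: the per-job I/O summary printed above is emitted twice, once as the rendered Latency(us) table and once as plain JSON (the block with "results", "iops", "io_failed" and the latency fields), so the headline numbers are machine-parseable. A sketch, assuming that JSON block has been captured to a file named result.json (the filename is hypothetical):

    # prints the run above as: ~3399 IOPS, 128 failed I/Os, ~37.4 ms average latency
    jq '.results[0] | {iops, mibps, io_failed, avg_latency_us}' result.json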
00:31:35.736 ************************************ 00:31:35.736 END TEST nvmf_host 00:31:35.736 ************************************ 00:31:35.736 12:30:31 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:31:35.736 12:30:31 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:31:35.736 12:30:31 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:31:35.736 12:30:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:35.736 12:30:31 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:35.736 12:30:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:35.736 ************************************ 00:31:35.736 START TEST nvmf_target_core_interrupt_mode 00:31:35.736 ************************************ 00:31:35.736 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:31:35.736 * Looking for test storage... 00:31:35.736 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:31:35.736 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:35.736 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:35.736 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:31:35.736 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:35.736 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:35.736 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:35.736 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:35.736 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:31:35.736 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:31:35.736 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:31:35.736 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:31:35.736 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:31:35.736 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:31:35.736 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:31:35.736 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:35.736 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:31:35.736 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:31:35.736 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:35.736 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:35.736 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:31:35.736 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:31:35.736 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:35.736 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:31:35.736 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:31:35.736 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:31:35.736 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:31:35.736 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:35.736 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:31:35.736 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:31:35.736 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:35.736 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:35.736 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:31:35.736 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:35.736 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:35.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:35.736 --rc genhtml_branch_coverage=1 00:31:35.736 --rc genhtml_function_coverage=1 00:31:35.736 --rc genhtml_legend=1 00:31:35.736 --rc geninfo_all_blocks=1 00:31:35.736 --rc geninfo_unexecuted_blocks=1 00:31:35.736 00:31:35.736 ' 00:31:35.736 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:35.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:35.736 --rc genhtml_branch_coverage=1 00:31:35.736 --rc genhtml_function_coverage=1 00:31:35.736 --rc genhtml_legend=1 00:31:35.736 --rc geninfo_all_blocks=1 00:31:35.737 --rc geninfo_unexecuted_blocks=1 00:31:35.737 00:31:35.737 ' 00:31:35.737 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:35.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:35.737 --rc genhtml_branch_coverage=1 00:31:35.737 --rc genhtml_function_coverage=1 00:31:35.737 --rc genhtml_legend=1 00:31:35.737 --rc geninfo_all_blocks=1 00:31:35.737 --rc geninfo_unexecuted_blocks=1 00:31:35.737 00:31:35.737 ' 00:31:35.737 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:35.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:35.737 --rc genhtml_branch_coverage=1 00:31:35.737 --rc genhtml_function_coverage=1 00:31:35.737 --rc genhtml_legend=1 00:31:35.737 --rc geninfo_all_blocks=1 00:31:35.737 --rc geninfo_unexecuted_blocks=1 00:31:35.737 00:31:35.737 ' 00:31:35.737 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:31:35.737 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:31:35.737 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:35.737 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:31:35.737 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:35.737 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:35.737 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:35.737 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:35.737 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:35.737 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:35.737 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:35.737 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:35.737 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:35.737 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:35.737 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:31:35.737 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:31:35.737 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:35.737 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:35.737 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:35.737 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:35.737 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:35.737 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:31:35.737 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:35.737 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:35.737 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:35.737 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.737 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.737 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.737 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:31:35.737 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.737 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:31:35.737 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:35.737 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:35.737 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:35.737 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:35.737 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:35.737 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:35.737 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:35.737 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:35.737 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:35.737 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:35.737 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:31:35.737 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:31:35.737 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:31:35.737 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:31:35.737 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:35.737 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:35.737 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:35.737 ************************************ 00:31:35.737 START TEST nvmf_abort 00:31:35.737 ************************************ 00:31:35.737 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:31:35.996 * Looking for test storage... 00:31:35.996 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:31:35.996 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:35.996 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:31:35.996 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:35.996 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:35.996 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:35.996 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:35.996 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:35.996 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:31:35.996 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:31:35.996 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:31:35.996 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:31:35.996 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:31:35.996 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:31:35.996 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:31:35.996 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:35.996 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:31:35.996 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:31:35.996 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:35.996 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:35.996 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:31:35.996 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:31:35.996 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:35.996 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:31:35.996 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:31:35.996 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:31:35.996 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:31:35.996 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:35.996 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:31:35.996 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:31:35.996 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:35.996 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:35.996 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:31:35.996 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:35.996 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:35.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:35.996 --rc genhtml_branch_coverage=1 00:31:35.996 --rc genhtml_function_coverage=1 00:31:35.996 --rc genhtml_legend=1 00:31:35.996 --rc geninfo_all_blocks=1 00:31:35.996 --rc geninfo_unexecuted_blocks=1 00:31:35.996 00:31:35.996 ' 00:31:35.996 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:35.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:35.996 --rc genhtml_branch_coverage=1 00:31:35.996 --rc genhtml_function_coverage=1 00:31:35.996 --rc genhtml_legend=1 00:31:35.996 --rc geninfo_all_blocks=1 00:31:35.996 --rc geninfo_unexecuted_blocks=1 00:31:35.996 00:31:35.996 ' 00:31:35.996 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:35.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:35.996 --rc genhtml_branch_coverage=1 00:31:35.996 --rc genhtml_function_coverage=1 00:31:35.996 --rc genhtml_legend=1 00:31:35.996 --rc geninfo_all_blocks=1 00:31:35.996 --rc geninfo_unexecuted_blocks=1 00:31:35.996 00:31:35.996 ' 00:31:35.996 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:35.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:35.996 --rc genhtml_branch_coverage=1 00:31:35.996 --rc genhtml_function_coverage=1 00:31:35.996 --rc genhtml_legend=1 00:31:35.996 --rc geninfo_all_blocks=1 00:31:35.996 --rc geninfo_unexecuted_blocks=1 00:31:35.996 00:31:35.996 ' 00:31:35.996 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:35.996 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:31:35.996 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:35.996 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:35.996 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:35.996 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:35.996 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:35.996 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:35.996 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:35.996 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:35.997 12:30:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@460 -- # nvmf_veth_init 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:31:35.997 Cannot find device "nvmf_init_br" 00:31:35.997 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@162 -- # true 00:31:35.998 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:31:35.998 Cannot find device "nvmf_init_br2" 00:31:35.998 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@163 -- # true 00:31:35.998 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:31:35.998 Cannot find device "nvmf_tgt_br" 00:31:35.998 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@164 -- # true 00:31:35.998 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:31:35.998 Cannot find device "nvmf_tgt_br2" 00:31:35.998 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@165 -- # true 00:31:35.998 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:31:35.998 Cannot find device "nvmf_init_br" 00:31:35.998 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@166 -- # true 00:31:35.998 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:31:35.998 Cannot find device "nvmf_init_br2" 00:31:35.998 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@167 -- # true 00:31:35.998 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:31:35.998 Cannot find device "nvmf_tgt_br" 00:31:35.998 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@168 -- # true 00:31:35.998 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:31:35.998 Cannot find device "nvmf_tgt_br2" 00:31:35.998 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@169 -- # true 00:31:35.998 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:31:35.998 Cannot find device "nvmf_br" 00:31:35.998 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@170 -- # true 00:31:35.998 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:31:35.998 Cannot find device "nvmf_init_if" 00:31:35.998 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@171 -- # true 00:31:35.998 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:31:35.998 Cannot find device "nvmf_init_if2" 00:31:35.998 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@172 -- # true 00:31:35.998 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:35.998 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:35.998 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@173 -- # true 00:31:35.998 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:35.998 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:35.998 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@174 -- # true 00:31:35.998 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:31:35.998 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:31:35.998 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:31:35.998 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:31:36.255 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:31:36.255 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:31:36.255 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:31:36.256 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:31:36.256 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:31:36.256 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:31:36.256 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:31:36.256 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:31:36.256 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:31:36.256 
12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:31:36.256 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:31:36.256 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:31:36.256 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:31:36.256 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:36.256 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:31:36.256 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:31:36.256 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:31:36.256 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:31:36.256 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:31:36.256 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:31:36.256 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:31:36.256 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:31:36.256 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:31:36.256 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:31:36.256 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:31:36.256 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:31:36.256 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:36.256 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:31:36.256 12:30:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:31:36.256 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:31:36.256 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms
00:31:36.256
00:31:36.256 --- 10.0.0.3 ping statistics ---
00:31:36.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:36.256 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms
00:31:36.256 12:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:31:36.256 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:31:36.256 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.031 ms
00:31:36.256
00:31:36.256 --- 10.0.0.4 ping statistics ---
00:31:36.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:36.256 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms
00:31:36.256 12:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:31:36.256 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:31:36.256 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms
00:31:36.256
00:31:36.256 --- 10.0.0.1 ping statistics ---
00:31:36.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:36.256 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms
00:31:36.256 12:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:31:36.256 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:31:36.256 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.034 ms
00:31:36.256
00:31:36.256 --- 10.0.0.2 ping statistics ---
00:31:36.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:36.256 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms
00:31:36.256 12:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:31:36.256 12:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@461 -- # return 0
00:31:36.256 12:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:31:36.256 12:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:31:36.256 12:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:31:36.256 12:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:31:36.256 12:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:31:36.256 12:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:31:36.256 12:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:31:36.256 12:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE
00:31:36.256 12:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:31:36.256 12:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable
00:31:36.256 12:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:31:36.256 12:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=97388
00:31:36.256 12:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:31:36.256 12:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 97388 00:31:36.256 12:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 97388 ']' 00:31:36.256 12:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:36.256 12:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:36.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:36.256 12:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:36.256 12:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:36.256 12:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:36.256 [2024-11-25 12:30:32.065719] thread.c:3055:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:36.256 [2024-11-25 12:30:32.066514] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:31:36.256 [2024-11-25 12:30:32.066564] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:36.516 [2024-11-25 12:30:32.204932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:36.516 [2024-11-25 12:30:32.240483] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:36.516 [2024-11-25 12:30:32.240525] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:36.516 [2024-11-25 12:30:32.240532] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:36.516 [2024-11-25 12:30:32.240536] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:36.516 [2024-11-25 12:30:32.240542] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:36.516 [2024-11-25 12:30:32.241207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:36.516 [2024-11-25 12:30:32.241717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:36.516 [2024-11-25 12:30:32.241718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:36.516 [2024-11-25 12:30:32.294870] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:36.516 [2024-11-25 12:30:32.295478] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:36.516 [2024-11-25 12:30:32.295828] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:36.516 [2024-11-25 12:30:32.296451] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:31:37.085 12:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:37.085 12:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0
00:31:37.085 12:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:31:37.085 12:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable
00:31:37.085 12:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:31:37.343 12:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:31:37.343 12:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256
00:31:37.343 12:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:37.343 12:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:31:37.343 [2024-11-25 12:30:32.998663] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:31:37.343 12:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:37.343 12:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0
00:31:37.343 12:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:37.343 12:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:31:37.343 Malloc0
00:31:37.343 12:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:37.343 12:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:31:37.343 12:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:37.343 12:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:31:37.343 Delay0
00:31:37.343 12:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:37.343 12:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:31:37.343 12:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:37.343 12:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:31:37.343 12:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:37.343 12:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
00:31:37.343 12:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:37.343 12:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:31:37.343 12:30:33
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.343 12:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:31:37.343 12:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.343 12:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:37.343 [2024-11-25 12:30:33.070694] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:31:37.343 12:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.343 12:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:31:37.343 12:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:37.343 12:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:37.343 12:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:37.343 12:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:31:37.601 [2024-11-25 12:30:33.237467] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:31:39.500 Initializing NVMe Controllers 00:31:39.500 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:31:39.500 controller IO queue size 128 less than required 00:31:39.500 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:31:39.500 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:31:39.500 Initialization complete. Launching workers. 
00:31:39.500 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 41068
00:31:39.500 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 41125, failed to submit 66
00:31:39.500 success 41068, unsuccessful 57, failed 0
00:31:39.501 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:31:39.501 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:39.501 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:31:39.501 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:39.501 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT
00:31:39.501 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini
00:31:39.501 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup
00:31:39.501 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync
00:31:39.501 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:39.501 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e
00:31:39.501 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:39.501 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:31:39.501 rmmod nvme_tcp
00:31:39.501 rmmod nvme_fabrics
00:31:39.501 rmmod nvme_keyring
00:31:39.501 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:39.501 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e
00:31:39.501 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0
00:31:39.501 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 97388 ']'
00:31:39.501 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 97388
00:31:39.501 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 97388 ']'
00:31:39.501 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 97388
00:31:39.501 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname
00:31:39.501 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:39.501 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97388
00:31:39.759 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:31:39.759 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:31:39.759 killing process with pid 97388
00:31:39.759 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97388'
00:31:39.759
12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 97388 00:31:39.759 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 97388 00:31:39.759 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:39.759 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:39.759 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:39.759 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:31:39.759 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:31:39.759 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:39.759 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:31:39.759 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:39.759 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:31:39.759 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:31:39.759 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:31:39.759 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:31:39.759 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:31:39.759 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:31:39.759 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:31:39.759 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:31:39.759 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:31:39.759 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:31:40.017 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:31:40.017 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:31:40.017 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:40.017 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:40.017 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@246 -- # remove_spdk_ns 00:31:40.017 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:40.017 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:40.017 12:30:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:40.017 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@300 -- # return 0 00:31:40.017 00:31:40.017 real 0m4.169s 00:31:40.017 user 0m8.888s 00:31:40.017 sys 0m1.158s 00:31:40.017 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:40.017 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:31:40.017 ************************************ 00:31:40.017 END TEST nvmf_abort 00:31:40.017 ************************************ 00:31:40.017 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:31:40.017 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:40.017 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:40.017 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:40.017 ************************************ 00:31:40.017 START TEST nvmf_ns_hotplug_stress 00:31:40.017 ************************************ 00:31:40.017 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:31:40.017 * Looking for test storage... 00:31:40.017 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:31:40.017 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:40.017 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:31:40.017 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:40.276 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:40.276 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:40.276 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:40.276 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:40.276 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:31:40.276 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:31:40.276 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:31:40.276 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:31:40.276 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:31:40.276 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:31:40.276 12:30:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:31:40.276 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:40.276 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:31:40.276 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:31:40.276 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:40.276 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:40.276 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:31:40.276 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:31:40.276 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:40.276 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:31:40.276 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:31:40.276 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:31:40.276 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:31:40.276 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:40.276 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:31:40.276 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:31:40.276 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:40.276 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:40.276 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:31:40.276 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:40.276 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:40.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.276 --rc genhtml_branch_coverage=1 00:31:40.276 --rc genhtml_function_coverage=1 00:31:40.276 --rc genhtml_legend=1 00:31:40.276 --rc geninfo_all_blocks=1 00:31:40.276 --rc geninfo_unexecuted_blocks=1 00:31:40.276 00:31:40.276 ' 00:31:40.276 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:40.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.276 --rc genhtml_branch_coverage=1 00:31:40.276 --rc genhtml_function_coverage=1 00:31:40.276 --rc genhtml_legend=1 00:31:40.276 --rc geninfo_all_blocks=1 00:31:40.276 --rc geninfo_unexecuted_blocks=1 00:31:40.276 00:31:40.276 
' 00:31:40.276 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:40.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.276 --rc genhtml_branch_coverage=1 00:31:40.276 --rc genhtml_function_coverage=1 00:31:40.276 --rc genhtml_legend=1 00:31:40.276 --rc geninfo_all_blocks=1 00:31:40.276 --rc geninfo_unexecuted_blocks=1 00:31:40.276 00:31:40.276 ' 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:40.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.277 --rc genhtml_branch_coverage=1 00:31:40.277 --rc genhtml_function_coverage=1 00:31:40.277 --rc genhtml_legend=1 00:31:40.277 --rc geninfo_all_blocks=1 00:31:40.277 --rc geninfo_unexecuted_blocks=1 00:31:40.277 00:31:40.277 ' 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:40.277 12:30:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:40.277 12:30:35 
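build_nvmf_app_args in the trace above assembles the target's argument array: the shared-memory id, a full 0xFFFF tracepoint mask, and, because this job exercises interrupt mode, the --interrupt-mode flag. A condensed sketch of that assembly; the interrupt_mode variable is illustrative, while NVMF_APP and NVMF_APP_SHM_ID come from the trace:

NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)
NVMF_APP+=(-i "${NVMF_APP_SHM_ID:-0}" -e 0xFFFF)     # SHM id plus tracepoint mask (nvmf/common.sh@29)
interrupt_mode=1                                      # the '[' 1 -eq 1 ']' check above is this switch
(( interrupt_mode )) && NVMF_APP+=(--interrupt-mode)  # nvmf/common.sh@34
echo "${NVMF_APP[@]}"                                 # command line later launched inside the target netns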
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@460 -- # nvmf_veth_init 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:31:40.277 Cannot find device "nvmf_init_br" 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # true 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 
-- # ip link set nvmf_init_br2 nomaster 00:31:40.277 Cannot find device "nvmf_init_br2" 00:31:40.277 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # true 00:31:40.278 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:31:40.278 Cannot find device "nvmf_tgt_br" 00:31:40.278 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@164 -- # true 00:31:40.278 12:30:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:31:40.278 Cannot find device "nvmf_tgt_br2" 00:31:40.278 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@165 -- # true 00:31:40.278 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:31:40.278 Cannot find device "nvmf_init_br" 00:31:40.278 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@166 -- # true 00:31:40.278 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:31:40.278 Cannot find device "nvmf_init_br2" 00:31:40.278 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@167 -- # true 00:31:40.278 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:31:40.278 Cannot find device "nvmf_tgt_br" 00:31:40.278 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@168 -- # true 00:31:40.278 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:31:40.278 Cannot find device "nvmf_tgt_br2" 00:31:40.278 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # true 00:31:40.278 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:31:40.278 Cannot find device "nvmf_br" 00:31:40.278 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@170 -- # true 00:31:40.278 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:31:40.278 Cannot find device "nvmf_init_if" 00:31:40.278 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # true 00:31:40.278 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:31:40.278 Cannot find device "nvmf_init_if2" 00:31:40.278 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # true 00:31:40.278 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:40.278 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:40.278 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@173 -- # true 00:31:40.278 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:40.278 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:40.278 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@174 -- # true 00:31:40.278 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:31:40.278 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:31:40.278 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:31:40.278 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:31:40.278 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:31:40.278 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:31:40.278 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:31:40.536 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:31:40.536 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:31:40.536 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:31:40.536 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:31:40.536 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:31:40.536 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:31:40.536 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:31:40.536 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:31:40.536 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:31:40.536 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:31:40.536 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:40.536 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:31:40.536 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:31:40.536 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:31:40.536 12:30:36 
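The "Cannot find device" and "Cannot open network namespace" messages above are expected: nvmf_veth_init tears down any leftover topology first, and each failing cleanup command is paired with true so it does not abort the run. The setup that follows builds four veth pairs, moves the target ends into a private namespace, and addresses both sides; condensed from the commands traced above (the nvmf_br bridge and the iptables ACCEPT rules come in the next entries):

ip netns add nvmf_tgt_ns_spdk
# one veth pair per interface; the *_br peers are later enslaved to the nvmf_br bridge
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk               # target side lives in the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator addresses
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target addresses
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2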
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:31:40.536 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:31:40.536 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:31:40.536 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:31:40.536 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:31:40.537 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:31:40.537 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:31:40.537 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:31:40.537 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:31:40.537 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:40.537 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:31:40.537 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:31:40.537 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:31:40.537 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:31:40.537 00:31:40.537 --- 10.0.0.3 ping statistics --- 00:31:40.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:40.537 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:31:40.537 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:31:40.537 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:31:40.537 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.034 ms 00:31:40.537 00:31:40.537 --- 10.0.0.4 ping statistics --- 00:31:40.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:40.537 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:31:40.537 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:31:40.537 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:40.537 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:31:40.537 00:31:40.537 --- 10.0.0.1 ping statistics --- 00:31:40.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:40.537 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:31:40.537 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:31:40.537 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:40.537 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.038 ms 00:31:40.537 00:31:40.537 --- 10.0.0.2 ping statistics --- 00:31:40.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:40.537 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:31:40.537 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:40.537 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@461 -- # return 0 00:31:40.537 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:40.537 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:40.537 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:40.537 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:40.537 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:40.537 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:40.537 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:40.537 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:31:40.537 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:40.537 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:40.537 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:40.537 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=97707 00:31:40.537 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 97707 00:31:40.537 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 97707 ']' 00:31:40.537 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:31:40.537 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:40.537 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:40.537 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock... 00:31:40.537 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:40.537 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:40.537 12:30:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:40.537 [2024-11-25 12:30:36.317917] thread.c:3055:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:40.537 [2024-11-25 12:30:36.318688] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:31:40.537 [2024-11-25 12:30:36.318732] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:40.795 [2024-11-25 12:30:36.452625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:40.795 [2024-11-25 12:30:36.493553] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:40.795 [2024-11-25 12:30:36.493599] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:40.795 [2024-11-25 12:30:36.493606] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:40.795 [2024-11-25 12:30:36.493611] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:40.795 [2024-11-25 12:30:36.493616] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:40.795 [2024-11-25 12:30:36.494443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:40.795 [2024-11-25 12:30:36.494603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:40.795 [2024-11-25 12:30:36.494612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:40.795 [2024-11-25 12:30:36.563337] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:40.795 [2024-11-25 12:30:36.564090] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:40.795 [2024-11-25 12:30:36.564190] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:40.795 [2024-11-25 12:30:36.565174] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
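waitforlisten above blocks until the freshly launched nvmf_tgt (pid 97707) has created its RPC socket; only then do the RPC-driven setup steps proceed. A crude sketch of the start-and-wait pattern, assuming the helper name wait_for_rpc_sock (the real helper also checks that the RPC server answers, not merely that the socket file exists), with NVMF_APP as assembled earlier:

ip netns exec nvmf_tgt_ns_spdk "${NVMF_APP[@]}" -m 0xE &   # -m 0xE: reactors on cores 1-3
nvmfpid=$!
wait_for_rpc_sock() {
    local i
    for (( i = 0; i < 100; i++ )); do
        [[ -S /var/tmp/spdk.sock ]] && return 0            # RPC listener is up
        kill -0 "$nvmfpid" 2>/dev/null || return 1         # target died during startup
        sleep 0.1
    done
    return 1                                               # timed out
}
wait_for_rpc_sock || exit 1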
00:31:41.360 12:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:41.360 12:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:31:41.361 12:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:41.361 12:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:41.361 12:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:41.361 12:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:41.361 12:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:31:41.361 12:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:41.698 [2024-11-25 12:30:37.423537] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:41.698 12:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:41.955 12:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:31:42.212 [2024-11-25 12:30:37.840112] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:31:42.213 12:30:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:31:42.213 12:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:31:42.470 Malloc0 00:31:42.470 12:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:42.727 Delay0 00:31:42.727 12:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:42.985 12:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:31:43.243 NULL1 00:31:43.243 12:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:31:43.499 12:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 
'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:31:43.499 12:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=97834 00:31:43.499 12:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97834 00:31:43.499 12:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:44.868 Read completed with error (sct=0, sc=11) 00:31:44.868 12:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:44.868 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:44.868 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:44.868 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:44.868 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:44.868 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:44.868 12:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:31:44.868 12:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:31:44.868 true 00:31:44.868 12:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97834 00:31:44.868 12:30:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:45.800 12:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:46.057 12:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:31:46.057 12:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:31:46.057 true 00:31:46.057 12:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97834 00:31:46.057 12:30:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:46.315 12:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:46.315 12:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:31:46.315 12:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:31:46.572 true 00:31:46.572 
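Stepping back, everything between the transport creation and the perf launch above is a plain sequence of rpc.py calls; collected verbatim from the trace (the flag comments are the editor's reading of the options):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                  # TCP transport, 8 KiB IO unit size
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
$rpc bdev_malloc_create 32 512 -b Malloc0                     # 32 MB malloc bdev, 512 B blocks
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # becomes namespace 1
$rpc bdev_null_create NULL1 1000 512                          # 1000-block null bdev
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1   # becomes namespace 2
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &                 # 30 s randread, qd 128, 512 B IOs
PERF_PID=$!                                                   # 97834 in this run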
12:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97834 00:31:46.572 12:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:46.829 12:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:47.087 12:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:31:47.087 12:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:31:47.087 true 00:31:47.087 12:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97834 00:31:47.087 12:30:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:47.377 12:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:47.635 12:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:31:47.635 12:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:31:47.893 true 00:31:47.893 12:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97834 00:31:47.893 12:30:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:48.824 12:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:49.081 12:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:31:49.081 12:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:31:49.081 true 00:31:49.337 12:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97834 00:31:49.337 12:30:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:49.337 12:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:49.594 12:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:31:49.594 
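From here the same four entries repeat until perf exits: the trace is the unrolled form of a small loop in ns_hotplug_stress.sh, sketched below from the @44-@50 entries (names verbatim from the log, reusing the rpc variable from the previous sketch). The recurring "Read completed with error (sct=0, sc=11)" floods are the point of the test: status 0x0B in the generic command status set is Invalid Namespace or Format, which is what reads against namespace 1 return while it is detached.

null_size=1000
while kill -0 "$PERF_PID" 2>/dev/null; do                            # run while perf still reads
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1       # hot-unplug namespace 1
    $rpc nvmf_subsystem_add_ns    nqn.2016-06.io.spdk:cnode1 Delay0  # plug it back in
    (( ++null_size ))                                                # 1001, 1002, ... 1030 in this run
    $rpc bdev_null_resize NULL1 "$null_size"                         # grow namespace 2 under live IO
done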
12:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:31:49.852 true 00:31:49.852 12:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97834 00:31:49.852 12:30:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:50.783 12:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:51.040 12:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:31:51.041 12:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:31:51.298 true 00:31:51.298 12:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97834 00:31:51.298 12:30:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:51.556 12:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:51.556 12:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:31:51.556 12:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:31:51.812 true 00:31:51.812 12:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97834 00:31:51.812 12:30:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:52.744 12:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:53.001 12:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:31:53.001 12:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:31:53.270 true 00:31:53.270 12:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97834 00:31:53.270 12:30:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:53.527 12:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:53.527 12:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:31:53.527 12:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:31:53.784 true 00:31:53.784 12:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97834 00:31:53.784 12:30:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:54.715 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:54.972 12:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:54.972 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:54.972 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:54.972 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:54.972 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:54.972 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:54.972 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:54.972 12:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:31:54.972 12:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:31:55.229 true 00:31:55.229 12:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97834 00:31:55.229 12:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:56.160 12:30:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:56.416 12:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:31:56.416 12:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:31:56.416 true 00:31:56.417 12:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97834 00:31:56.417 12:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:56.674 12:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:56.931 12:30:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:31:56.931 12:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:31:57.188 true 00:31:57.188 12:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97834 00:31:57.188 12:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:58.117 12:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:58.374 12:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:31:58.374 12:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:31:58.374 true 00:31:58.631 12:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97834 00:31:58.631 12:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:58.631 12:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:58.887 12:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:31:58.887 12:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:31:59.144 true 00:31:59.144 12:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97834 00:31:59.144 12:30:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:00.075 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:00.076 12:30:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:00.333 12:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:32:00.333 12:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:32:00.590 true 00:32:00.590 12:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97834 00:32:00.590 12:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:00.847 12:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:00.847 12:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:32:00.847 12:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:32:01.104 true 00:32:01.104 12:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97834 00:32:01.104 12:30:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:02.036 12:30:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:02.293 12:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:32:02.293 12:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:32:02.550 true 00:32:02.550 12:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97834 00:32:02.550 12:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:02.807 12:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:03.064 12:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:32:03.064 12:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:32:03.064 true 00:32:03.064 12:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97834 00:32:03.064 12:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:04.436 12:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:04.436 12:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:32:04.436 12:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:32:04.436 true 00:32:04.436 12:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 97834 00:32:04.436 12:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:04.693 12:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:04.950 12:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:32:04.950 12:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:32:05.208 true 00:32:05.208 12:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97834 00:32:05.208 12:31:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:06.192 12:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:06.463 12:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:32:06.463 12:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:32:06.463 true 00:32:06.463 12:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97834 00:32:06.463 12:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:06.719 12:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:06.977 12:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:32:06.977 12:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:32:07.234 true 00:32:07.234 12:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97834 00:32:07.234 12:31:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:08.168 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:08.168 12:31:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:08.168 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:08.168 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:32:08.168 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:08.168 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:08.425 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:08.425 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:32:08.425 12:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:32:08.425 12:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:32:08.683 true 00:32:08.683 12:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97834 00:32:08.683 12:31:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:09.616 12:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:09.616 12:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:32:09.616 12:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:32:09.873 true 00:32:09.873 12:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97834 00:32:09.873 12:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:10.130 12:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:10.130 12:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:32:10.130 12:31:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:32:10.388 true 00:32:10.388 12:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97834 00:32:10.388 12:31:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:11.320 12:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:11.578 12:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:32:11.578 12:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:32:11.836 true 00:32:11.836 
00:32:11.836 12:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97834
00:32:11.836 12:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:12.093 12:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:12.351 12:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:32:12.351 12:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:32:12.351 true
00:32:12.351 12:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97834
00:32:12.351 12:31:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:13.721 12:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:32:13.721 Initializing NVMe Controllers
00:32:13.721 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:32:13.721 Controller IO queue size 128, less than required.
00:32:13.721 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:32:13.721 Controller IO queue size 128, less than required.
00:32:13.721 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:32:13.721 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:32:13.721 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:32:13.721 Initialization complete. Launching workers.
00:32:13.721 ========================================================
00:32:13.721                                                                          Latency(us)
00:32:13.721 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:32:13.721 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:     768.63       0.38   96783.04    2509.42 1047132.33
00:32:13.721 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   16350.14       7.98    7828.85    2486.41  434393.70
00:32:13.721 ========================================================
00:32:13.721 Total                                                                  :   17118.78       8.36   11822.90    2486.41 1047132.33
00:32:13.721
00:32:13.721 12:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:32:13.721 12:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:32:13.721 true
00:32:13.721 12:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 97834
/home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (97834) - No such process
00:32:13.721 12:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 97834
00:32:13.722 12:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:32:13.979 12:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:32:14.237 12:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:32:14.237 12:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:32:14.237 12:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:32:14.237 12:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:32:14.237 12:31:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:32:14.237 null0
00:32:14.237 12:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:32:14.237 12:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:32:14.237 12:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:32:14.494 null1
00:32:14.494 12:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:32:14.494 12:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:32:14.494 12:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:32:14.494 null2
00:32:14.752 12:31:10
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:14.752 12:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:14.752 12:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:32:14.752 null3 00:32:14.752 12:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:14.752 12:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:14.752 12:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:32:15.009 null4 00:32:15.009 12:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:15.009 12:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:15.009 12:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:32:15.266 null5 00:32:15.266 12:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:15.266 12:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:15.266 12:31:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:32:15.266 null6 00:32:15.266 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:15.266 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:15.266 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:32:15.532 null7 00:32:15.532 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:32:15.532 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:32:15.532 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:32:15.532 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:15.532 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:32:15.532 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:15.532 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:15.532 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:32:15.532 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:32:15.532 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:15.532 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:15.532 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:32:15.532 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:15.532 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:32:15.532 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:15.532 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:32:15.532 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:15.532 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:15.532 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:15.532 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:15.532 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:32:15.532 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:15.532 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:32:15.532 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:15.532 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:32:15.532 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:15.532 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:15.532 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:15.532 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:32:15.532 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:32:15.532 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:15.532 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:32:15.532 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:15.532 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:15.532 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:15.532 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:15.532 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:32:15.532 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:32:15.532 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:32:15.532 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:15.532 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:15.532 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:15.532 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:15.532 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:15.532 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:32:15.532 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:32:15.532 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:15.532 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:32:15.532 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:15.532 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:15.532 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:15.532 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:15.532 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
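Note: the interleaved xtrace in this region records eight add_remove workers being prepared and launched in the background. A sketch of the @58-@66 spawn pattern, as it can be inferred from the line markers (the loop syntax is an assumption; the rpc.py calls and the wait over pids 98867...98881 are from the trace; add_remove itself is sketched after the end of this trace):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    nthreads=8                                          # @58
    pids=()                                             # @58
    for ((i = 0; i < nthreads; ++i)); do                # @59
        "$rpc_py" bdev_null_create "null$i" 100 4096    # @60: null0 .. null7, 100 MB, 4096-byte blocks
    done
    for ((i = 0; i < nthreads; ++i)); do                # @62
        add_remove $((i + 1)) "null$i" &                # @63: worker churns NSID i+1 using bdev null$i
        pids+=($!)                                      # @64: collect the worker pid
    done
    wait "${pids[@]}"                                   # @66: in this run, wait 98867 98868 98870 98872 98874 98876 98878 98881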
00:32:15.533 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:32:15.533 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:32:15.533 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:15.533 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:15.533 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:15.533 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:15.533 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:15.533 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:32:15.533 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:32:15.533 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:32:15.533 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 98867 98868 98870 98872 98874 98876 98878 98881 00:32:15.533 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:32:15.533 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:32:15.533 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:32:15.533 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:15.533 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:15.801 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:15.801 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:15.801 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:15.801 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:15.801 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:32:15.801 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:15.801 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:15.801 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:16.060 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.060 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.060 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:16.060 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.060 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.060 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:16.060 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.060 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.060 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:16.060 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.060 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.060 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:16.060 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.060 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.060 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:16.060 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.060 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.060 12:31:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:16.060 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.060 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.060 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:16.060 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.060 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.060 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:16.060 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:16.319 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:16.319 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:16.319 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:16.319 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:16.319 12:31:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:16.319 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:16.319 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:16.319 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.319 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.319 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:16.319 12:31:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.319 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.319 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:16.319 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.319 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.319 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:16.319 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.319 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.319 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:16.577 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.577 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.577 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:16.577 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.577 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.577 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:16.577 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.577 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.577 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:16.577 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.577 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.577 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:16.577 12:31:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:16.577 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:16.577 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:16.577 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:16.835 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:16.835 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:16.835 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:16.835 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:16.835 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.835 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.835 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:16.835 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.835 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.835 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:16.835 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.835 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.835 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:16.835 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:16.835 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:16.835 
12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:17.093 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.093 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.093 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:17.093 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.093 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.093 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:17.093 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.093 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.094 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:17.094 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.094 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.094 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:17.094 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:17.094 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:17.094 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:17.094 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:17.094 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:17.094 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:32:17.094 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:17.094 12:31:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:17.352 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.352 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.352 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:17.352 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.352 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.352 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:17.352 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.352 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.352 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:17.352 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.352 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.352 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:17.352 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.352 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.352 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:17.352 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.352 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.352 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:17.352 12:31:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.352 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.352 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:17.610 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.610 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.610 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:17.610 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:17.610 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:17.610 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:17.610 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:17.610 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:17.610 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:17.610 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:17.610 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.610 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.610 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:17.610 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:17.868 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.868 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.868 
12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:17.868 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.868 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.868 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:17.868 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.868 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.868 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:17.868 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.868 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.868 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:17.868 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.868 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.868 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:17.868 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.868 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.868 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:17.868 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:17.868 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:17.868 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:17.868 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:17.868 12:31:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:17.868 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:18.126 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:18.126 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:18.126 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:18.126 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:18.126 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:18.126 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:18.126 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:18.126 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:18.126 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:18.126 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:18.126 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:18.126 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:18.126 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:18.126 12:31:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:18.384 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:18.384 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:18.385 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:18.385 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:18.385 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:18.385 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:18.385 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:18.385 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:18.385 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:18.385 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:18.385 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:18.385 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:18.385 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:18.385 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:18.385 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:18.385 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:18.385 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:18.385 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:18.385 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:18.642 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:18.642 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:18.642 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:18.642 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:18.642 
12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:18.642 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:18.642 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:18.642 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:18.642 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:18.642 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:18.642 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:18.642 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:18.642 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:18.642 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:18.642 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:18.642 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:18.900 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:18.900 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:18.900 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:18.900 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:18.900 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:18.900 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:18.900 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:18.900 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:18.900 12:31:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:18.900 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:18.900 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:18.900 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:18.900 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:18.900 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:18.900 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:18.900 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:18.900 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:18.900 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:18.900 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:18.900 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:19.159 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:19.159 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:19.159 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:19.159 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:19.159 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:19.159 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:19.159 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:32:19.159 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:19.159 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:19.159 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:19.159 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:19.159 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:19.159 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:19.159 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:19.159 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:19.159 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:19.159 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:19.159 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:19.159 12:31:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:19.159 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:19.159 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:19.159 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:19.418 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:19.418 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:19.418 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:19.418 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:19.418 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 
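The @16, @17 and @18 frames repeating above are the whole stress kernel of target/ns_hotplug_stress.sh: a counter bounded at ten passes, an rpc.py nvmf_subsystem_add_ns call attaching one of the null bdevs null0-null7 to nqn.2016-06.io.spdk:cnode1 as NSID 1-8, and an rpc.py nvmf_subsystem_remove_ns call detaching one again. A minimal sketch of such a loop, reconstructed from the xtrace alone -- only the two rpc.py command forms are verbatim; the pass structure, the randomization, and the error tolerance are assumptions, not the script's actual source:

#!/usr/bin/env bash
# Hypothetical reconstruction of the add/remove churn traced above; the real
# target/ns_hotplug_stress.sh may order and pick namespaces differently.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

for (( i = 0; i < 10; ++i )); do
    nsid=$(( RANDOM % 8 + 1 ))   # assumed: NSIDs 1-8 map onto bdevs null0-null7
    if (( RANDOM % 2 )); then
        # verbatim form from the log: attach bdev null(N-1) as namespace N
        "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "null$(( nsid - 1 ))" || true
    else
        # verbatim form from the log: drop namespace N (may already be absent)
        "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid" || true
    fi
done

The NSID-to-bdev pairing (namespace n backed by null(n-1)) matches every add_ns line in the trace above.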
00:32:19.418 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:19.418 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:19.418 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:19.418 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:32:19.418 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:19.418 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:19.418 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:19.676 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:19.676 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:19.676 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:32:19.676 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:19.676 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:19.676 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:19.676 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:32:19.676 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:32:19.676 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:19.676 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:19.676 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:32:19.676 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:19.676 12:31:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:19.676 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:32:19.676 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:19.677 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:19.677 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:32:19.677 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:19.677 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:19.677 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:32:19.677 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:32:19.677 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:19.677 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:19.677 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:32:19.677 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:19.677 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:19.677 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:32:19.935 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:32:19.935 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:19.935 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:32:19.935 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:32:19.935 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:19.935 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:19.935 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:19.935 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:19.935 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:32:19.935 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:19.935 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:20.194 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:20.194 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:20.194 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:20.194 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:20.194 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:20.194 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:20.194 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:32:20.194 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:32:20.194 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:32:20.194 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:32:20.194 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:20.194 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:32:20.194 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:20.194 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:32:20.194 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:20.194 12:31:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:20.194 rmmod nvme_tcp 00:32:20.194 rmmod nvme_fabrics 00:32:20.194 rmmod nvme_keyring 00:32:20.194 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:20.194 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:32:20.194 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@129 -- # return 0 00:32:20.194 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 97707 ']' 00:32:20.194 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 97707 00:32:20.194 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 97707 ']' 00:32:20.194 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 97707 00:32:20.194 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:32:20.194 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:20.194 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97707 00:32:20.194 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:20.194 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:20.194 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97707' 00:32:20.194 killing process with pid 97707 00:32:20.194 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 97707 00:32:20.194 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 97707 00:32:20.452 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:20.452 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:20.452 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:20.452 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:32:20.452 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:32:20.452 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:20.452 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:32:20.452 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:20.452 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:32:20.452 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:32:20.452 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:32:20.452 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:32:20.452 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 
nomaster
00:32:20.452 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:32:20.452 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:32:20.452 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:32:20.452 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:32:20.452 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:32:20.452 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:32:20.710 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:32:20.710 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:32:20.710 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:32:20.710 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@246 -- # remove_spdk_ns
00:32:20.710 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:32:20.710 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:32:20.710 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:32:20.710 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@300 -- # return 0
00:32:20.710
00:32:20.710 real 0m40.635s
00:32:20.710 user 2m58.060s
00:32:20.710 sys 0m11.905s
00:32:20.710 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:32:20.710 ************************************
00:32:20.710 END TEST nvmf_ns_hotplug_stress
00:32:20.710 ************************************
00:32:20.710 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:32:20.710 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:32:20.710 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:32:20.710 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:32:20.710 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:32:20.710 ************************************
00:32:20.710 START TEST nvmf_delete_subsystem
00:32:20.710 ************************************
00:32:20.710 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:32:20.710 * Looking for test storage...
00:32:20.710 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:32:20.710 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:20.710 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:32:20.710 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:20.710 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:20.710 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:20.710 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:20.710 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:20.710 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:32:20.711 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:32:20.711 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:32:20.711 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:32:20.711 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:32:20.711 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:32:20.711 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:32:20.711 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:20.711 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:32:20.711 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:32:20.711 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:20.711 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:20.711 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:32:20.711 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:32:20.711 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:20.711 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:32:20.711 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:32:20.711 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:32:20.711 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:32:20.711 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:20.711 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:32:20.711 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:32:20.711 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:20.711 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:20.711 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:32:20.711 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:20.711 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:20.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:20.711 --rc genhtml_branch_coverage=1 00:32:20.711 --rc genhtml_function_coverage=1 00:32:20.711 --rc genhtml_legend=1 00:32:20.711 --rc geninfo_all_blocks=1 00:32:20.711 --rc geninfo_unexecuted_blocks=1 00:32:20.711 00:32:20.711 ' 00:32:20.711 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:20.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:20.711 --rc genhtml_branch_coverage=1 00:32:20.711 --rc genhtml_function_coverage=1 00:32:20.711 --rc genhtml_legend=1 00:32:20.711 --rc geninfo_all_blocks=1 00:32:20.711 --rc geninfo_unexecuted_blocks=1 00:32:20.711 00:32:20.711 ' 00:32:20.711 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:20.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:20.711 --rc genhtml_branch_coverage=1 00:32:20.711 --rc genhtml_function_coverage=1 00:32:20.711 --rc genhtml_legend=1 00:32:20.711 --rc geninfo_all_blocks=1 00:32:20.711 --rc geninfo_unexecuted_blocks=1 00:32:20.711 00:32:20.711 ' 00:32:20.711 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:20.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:20.711 --rc genhtml_branch_coverage=1 00:32:20.711 --rc genhtml_function_coverage=1 00:32:20.711 --rc 
genhtml_legend=1 00:32:20.711 --rc geninfo_all_blocks=1 00:32:20.711 --rc geninfo_unexecuted_blocks=1 00:32:20.711 00:32:20.711 ' 00:32:20.711 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:20.970 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:32:20.970 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:20.970 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:20.971 12:31:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@460 -- # nvmf_veth_init 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:20.971 12:31:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:32:20.971 Cannot find device "nvmf_init_br" 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # true 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:32:20.971 Cannot find device "nvmf_init_br2" 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # true 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:32:20.971 Cannot find device "nvmf_tgt_br" 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@164 -- # true 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:32:20.971 Cannot find device "nvmf_tgt_br2" 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@165 -- # true 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:32:20.971 Cannot find device "nvmf_init_br" 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@166 -- # true 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:32:20.971 Cannot find device "nvmf_init_br2" 00:32:20.971 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@167 -- # true 00:32:20.972 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:32:20.972 Cannot find device "nvmf_tgt_br" 00:32:20.972 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@168 -- # true 00:32:20.972 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:32:20.972 Cannot find device "nvmf_tgt_br2" 00:32:20.972 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # true 00:32:20.972 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:32:20.972 Cannot find device "nvmf_br" 00:32:20.972 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@170 -- # true 00:32:20.972 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:32:20.972 Cannot find device "nvmf_init_if" 00:32:20.972 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # true 00:32:20.972 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:32:20.972 Cannot find device "nvmf_init_if2" 00:32:20.972 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # true 00:32:20.972 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:20.972 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:20.972 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@173 -- # true 00:32:20.972 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:20.972 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:20.972 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@174 -- # true 00:32:20.972 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:32:20.972 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:32:20.972 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:32:20.972 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:32:20.972 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:32:20.972 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:32:20.972 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:32:20.972 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:32:20.972 12:31:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:32:20.972 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:32:20.972 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:32:20.972 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:32:20.972 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:32:20.972 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:32:20.972 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:32:20.972 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:32:20.972 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:32:20.972 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:32:20.972 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:32:20.972 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:32:20.972 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:32:20.972 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:32:20.972 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:32:20.972 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:32:20.972 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:32:21.231 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:32:21.231 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:32:21.231 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:32:21.231 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:32:21.231 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 
'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:32:21.231 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:32:21.231 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:32:21.231 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:32:21.231 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:32:21.231 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms
00:32:21.231
00:32:21.231 --- 10.0.0.3 ping statistics ---
00:32:21.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:21.231 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms
00:32:21.231 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:32:21.231 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:32:21.231 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.080 ms
00:32:21.231
00:32:21.231 --- 10.0.0.4 ping statistics ---
00:32:21.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:21.231 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms
00:32:21.231 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:32:21.231 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:32:21.231 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms
00:32:21.231
00:32:21.231 --- 10.0.0.1 ping statistics ---
00:32:21.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:21.231 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms
00:32:21.231 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:32:21.231 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:32:21.231 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms
00:32:21.231
00:32:21.231 --- 10.0.0.2 ping statistics ---
00:32:21.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:21.231 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms
00:32:21.231 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:32:21.231 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@461 -- # return 0
00:32:21.231 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:32:21.231 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:32:21.231 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:32:21.231 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:32:21.231 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:32:21.231 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:32:21.231 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:32:21.231 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3
00:32:21.231 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:32:21.231 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable
00:32:21.231 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:32:21.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:32:21.231 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=100258
00:32:21.231 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 100258
00:32:21.231 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 100258 ']'
00:32:21.231 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:21.231 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:21.231 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3
00:32:21.231 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:21.231 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:21.231 12:31:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:21.231 [2024-11-25 12:31:16.946538] thread.c:3055:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:21.231 [2024-11-25 12:31:16.947314] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:32:21.231 [2024-11-25 12:31:16.947355] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:21.231 [2024-11-25 12:31:17.085720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:21.490 [2024-11-25 12:31:17.127002] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:21.490 [2024-11-25 12:31:17.127045] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:21.490 [2024-11-25 12:31:17.127052] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:21.490 [2024-11-25 12:31:17.127057] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:21.490 [2024-11-25 12:31:17.127062] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:21.490 [2024-11-25 12:31:17.127794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:21.490 [2024-11-25 12:31:17.127819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:21.490 [2024-11-25 12:31:17.199062] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:21.490 [2024-11-25 12:31:17.199834] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:21.490 [2024-11-25 12:31:17.200111] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
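nvmfappstart above reduces to launching nvmf_tgt inside the target namespace with --interrupt-mode and a two-core mask, then blocking until the RPC socket answers; the reactor and spdk_thread notices confirm both cores came up in interrupt mode. A hedged sketch of the equivalent launch (the harness uses its own waitforlisten helper; the polling loop here is an illustrative stand-in):

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
nvmfpid=$!
# Poll until the app answers RPCs on the default UNIX socket
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done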
00:32:22.055 12:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:22.055 12:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:32:22.055 12:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:22.055 12:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:22.055 12:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:22.055 12:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:22.055 12:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:22.055 12:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.055 12:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:22.055 [2024-11-25 12:31:17.864633] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:22.055 12:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.055 12:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:22.056 12:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.056 12:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:22.056 12:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.056 12:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:32:22.056 12:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.056 12:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:22.056 [2024-11-25 12:31:17.888997] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:32:22.056 12:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.056 12:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:32:22.056 12:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.056 12:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:22.056 NULL1 00:32:22.056 12:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.056 12:31:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:32:22.056 12:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.056 12:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:22.056 Delay0 00:32:22.056 12:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.056 12:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:22.056 12:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.056 12:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:22.056 12:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.056 12:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=100309 00:32:22.056 12:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:32:22.056 12:31:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:32:22.314 [2024-11-25 12:31:18.076418] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
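The rpc_cmd calls traced above provision the target for the delete-under-load case: a TCP transport, subsystem cnode1 listening on 10.0.0.3:4420, and a null bdev wrapped in a delay bdev whose -r/-t/-w/-n latencies are 1000000 us (about 1 s), so perf still has queued I/O when the subsystem is deleted; the burst of aborted completions below is the expected outcome. Restated as a plain script (the rpc() wrapper is illustrative; the harness's rpc_cmd resolves the socket itself):

rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
rpc bdev_null_create NULL1 1000 512
rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# Keep 128 queued I/Os in flight from the initiator while the delete lands
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &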
00:32:24.212 12:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:32:24.212 12:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:24.212 12:31:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:32:24.472 [... several hundred repeated 'Read completed with error (sct=0, sc=8)' / 'Write completed with error (sct=0, sc=8)' / 'starting I/O failed: -6' lines, condensed: perf I/O still queued behind Delay0 is aborted as the subsystem is torn down ...]
00:32:24.472 [2024-11-25 12:31:20.102365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23497e0 is same with the state(6) to be set
00:32:24.473 [2024-11-25 12:31:20.103937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c7000d500 is same with the state(6) to be set
00:32:24.473 [2024-11-25 12:31:20.104606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c70000c70 is same with the state(6) to be set
00:32:25.411 [2024-11-25 12:31:21.088336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2344ee0 is same with the state(6) to be set
00:32:25.411 [2024-11-25 12:31:21.103794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2348a50 is same with the state(6) to be set
00:32:25.411 [2024-11-25 12:31:21.103933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x234bea0 is same with the state(6) to be set
00:32:25.411 [2024-11-25 12:31:21.104405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c7000d830 is same with the state(6) to be set
00:32:25.411 [2024-11-25 12:31:21.104852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c7000d050 is same with the state(6) to be set
00:32:25.411 Initializing NVMe Controllers
00:32:25.411 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1
00:32:25.411 Controller IO queue size 128, less than required.
00:32:25.411 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:32:25.411 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:32:25.411 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:32:25.411 Initialization complete. Launching workers.
00:32:25.411 ========================================================
00:32:25.411 Latency(us)
00:32:25.412 Device Information : IOPS MiB/s Average min max
00:32:25.412 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 161.68 0.08 911831.95 256.38 1009353.42
00:32:25.412 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 166.16 0.08 962524.66 687.11 2002144.71
00:32:25.412 ========================================================
00:32:25.412 Total : 327.84 0.16 937524.46 256.38 2002144.71
00:32:25.412
00:32:25.412 [2024-11-25 12:31:21.105637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2344ee0 (9): Bad file descriptor
00:32:25.412 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred
00:32:25.412 12:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:25.412 12:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:32:25.412 12:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 100309
00:32:25.412 12:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:32:25.980 12:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:32:25.980 12:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 100309
00:32:25.980 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (100309) - No such process
00:32:25.980 12:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 100309
00:32:25.980 12:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:32:25.980 12:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 100309
00:32:25.980 12:31:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:32:25.980 12:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:25.980 12:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:32:25.980 12:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:25.980 12:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 100309 00:32:25.980 12:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:32:25.980 12:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:25.980 12:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:25.980 12:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:25.980 12:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:25.980 12:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.980 12:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:25.980 12:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.980 12:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:32:25.980 12:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.980 12:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:25.980 [2024-11-25 12:31:21.633036] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:32:25.980 12:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.980 12:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:32:25.980 12:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.980 12:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:25.980 12:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:25.980 12:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=100349 00:32:25.980 12:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:32:25.980 12:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@57 -- # kill -0 100349 00:32:25.980 12:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:25.980 12:31:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:32:25.980 [2024-11-25 12:31:21.798628] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:32:26.544 12:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:26.544 12:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 100349 00:32:26.544 12:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:26.802 12:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:26.802 12:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 100349 00:32:26.802 12:31:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:27.403 12:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:27.403 12:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 100349 00:32:27.403 12:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:27.999 12:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:27.999 12:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 100349 00:32:27.999 12:31:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:28.564 12:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:28.564 12:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 100349 00:32:28.564 12:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:28.822 12:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:28.822 12:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 100349 00:32:28.822 12:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:32:29.079 Initializing NVMe Controllers 00:32:29.079 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:32:29.079 Controller IO queue size 128, less than required. 
00:32:29.079 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:29.079 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:32:29.079 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:32:29.079 Initialization complete. Launching workers. 00:32:29.079 ======================================================== 00:32:29.079 Latency(us) 00:32:29.079 Device Information : IOPS MiB/s Average min max 00:32:29.079 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002291.91 1000096.44 1041011.81 00:32:29.079 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003619.74 1000273.93 1009576.22 00:32:29.079 ======================================================== 00:32:29.079 Total : 256.00 0.12 1002955.83 1000096.44 1041011.81 00:32:29.079 00:32:29.336 12:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:32:29.336 12:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 100349 00:32:29.336 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (100349) - No such process 00:32:29.336 12:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 100349 00:32:29.336 12:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:32:29.336 12:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:32:29.336 12:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:29.336 12:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:32:29.902 12:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:29.902 12:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:32:29.902 12:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:29.902 12:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:29.902 rmmod nvme_tcp 00:32:29.902 rmmod nvme_fabrics 00:32:29.902 rmmod nvme_keyring 00:32:29.902 12:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:29.902 12:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:32:29.902 12:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:32:29.902 12:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 100258 ']' 00:32:29.902 12:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 100258 00:32:29.902 12:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 100258 ']' 00:32:29.902 12:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 100258 00:32:29.902 
12:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:32:29.902 12:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:29.902 12:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100258 00:32:29.902 killing process with pid 100258 00:32:29.902 12:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:29.902 12:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:29.902 12:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100258' 00:32:29.902 12:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 100258 00:32:29.902 12:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 100258 00:32:29.902 12:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:29.902 12:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:29.902 12:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:29.902 12:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:32:29.902 12:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:32:29.902 12:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:29.902 12:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:32:29.902 12:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:29.902 12:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:32:29.902 12:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:32:29.902 12:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:32:30.160 12:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:32:30.160 12:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:32:30.160 12:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:32:30.160 12:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:32:30.160 12:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:32:30.160 12:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:32:30.160 12:31:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:32:30.160 12:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:32:30.160 12:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:32:30.160 12:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:30.160 12:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:30.160 12:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@246 -- # remove_spdk_ns 00:32:30.160 12:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:30.160 12:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:30.160 12:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:30.160 12:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@300 -- # return 0 00:32:30.160 ************************************ 00:32:30.160 END TEST nvmf_delete_subsystem 00:32:30.160 ************************************ 00:32:30.160 00:32:30.160 real 0m9.523s 00:32:30.160 user 0m24.671s 00:32:30.160 sys 0m2.123s 00:32:30.160 12:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:30.161 12:31:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:32:30.161 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:32:30.161 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:30.161 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:30.161 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:30.161 ************************************ 00:32:30.161 START TEST nvmf_host_management 00:32:30.161 ************************************ 00:32:30.161 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:32:30.420 * Looking for test storage... 
00:32:30.420 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:30.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:30.420 --rc genhtml_branch_coverage=1 00:32:30.420 --rc genhtml_function_coverage=1 00:32:30.420 --rc genhtml_legend=1 00:32:30.420 --rc geninfo_all_blocks=1 00:32:30.420 --rc geninfo_unexecuted_blocks=1 00:32:30.420 00:32:30.420 ' 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:30.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:30.420 --rc genhtml_branch_coverage=1 00:32:30.420 --rc genhtml_function_coverage=1 00:32:30.420 --rc genhtml_legend=1 00:32:30.420 --rc geninfo_all_blocks=1 00:32:30.420 --rc geninfo_unexecuted_blocks=1 00:32:30.420 00:32:30.420 ' 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:30.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:30.420 --rc genhtml_branch_coverage=1 00:32:30.420 --rc genhtml_function_coverage=1 00:32:30.420 --rc genhtml_legend=1 00:32:30.420 --rc geninfo_all_blocks=1 00:32:30.420 --rc geninfo_unexecuted_blocks=1 00:32:30.420 00:32:30.420 ' 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:30.420 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:30.420 --rc genhtml_branch_coverage=1 00:32:30.420 --rc genhtml_function_coverage=1 00:32:30.420 --rc genhtml_legend=1 
00:32:30.420 --rc geninfo_all_blocks=1 00:32:30.420 --rc geninfo_unexecuted_blocks=1 00:32:30.420 00:32:30.420 ' 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same golangci/protoc/go trio repeated by earlier sourcings ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... accumulated PATH as above ...]
00:32:30.420 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... accumulated PATH as above ...]
00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH
00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... accumulated PATH as above ...]
00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0
00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:32:30.421 12:31:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:32:30.421 12:31:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:32:30.421 Cannot find device "nvmf_init_br" 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:32:30.421 Cannot find device "nvmf_init_br2" 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:32:30.421 Cannot find device "nvmf_tgt_br" 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:32:30.421 Cannot find device "nvmf_tgt_br2" 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:32:30.421 Cannot find device "nvmf_init_br" 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 
down 00:32:30.421 Cannot find device "nvmf_init_br2" 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:32:30.421 Cannot find device "nvmf_tgt_br" 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:32:30.421 Cannot find device "nvmf_tgt_br2" 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:32:30.421 Cannot find device "nvmf_br" 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:32:30.421 Cannot find device "nvmf_init_if" 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:32:30.421 Cannot find device "nvmf_init_if2" 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:30.421 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:30.421 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:32:30.421 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:32:30.679 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:32:30.679 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:32:30.679 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:32:30.680 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:32:30.680 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns 
nvmf_tgt_ns_spdk 00:32:30.680 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:32:30.680 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:32:30.680 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:32:30.680 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:32:30.680 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:32:30.680 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:32:30.680 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:32:30.680 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:32:30.680 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:32:30.680 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:32:30.680 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:32:30.680 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:32:30.680 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:32:30.680 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:32:30.680 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:32:30.680 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:32:30.680 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:32:30.680 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:32:30.680 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:32:30.680 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:32:30.680 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:32:30.680 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:32:30.680 12:31:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:32:30.680 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:32:30.680 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:32:30.680 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:32:30.680 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:32:30.680 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:32:30.680 00:32:30.680 --- 10.0.0.3 ping statistics --- 00:32:30.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:30.680 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:32:30.680 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:32:30.680 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:32:30.680 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.029 ms 00:32:30.680 00:32:30.680 --- 10.0.0.4 ping statistics --- 00:32:30.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:30.680 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:32:30.680 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:32:30.680 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:30.680 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:32:30.680 00:32:30.680 --- 10.0.0.1 ping statistics --- 00:32:30.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:30.680 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:32:30.680 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:32:30.680 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:30.680 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.040 ms 00:32:30.680 00:32:30.680 --- 10.0.0.2 ping statistics --- 00:32:30.680 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:30.680 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:32:30.680 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:30.680 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:32:30.680 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:30.680 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:30.680 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:30.680 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:30.680 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:30.680 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:30.680 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:30.680 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:32:30.680 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:32:30.680 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:32:30.680 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:30.680 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:30.680 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:30.680 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=100644 00:32:30.680 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 100644 00:32:30.680 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 100644 ']' 00:32:30.680 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:30.680 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:30.680 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:30.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
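
Before the target start just traced, nvmf_veth_init built the whole test topology. Condensed to its essentials, using exactly the namespace, interface, and address names from the trace above (the second initiator/target pair, nvmf_init_if2/nvmf_tgt_if2 with 10.0.0.2/10.0.0.4, is configured identically and omitted here):

# target side lives in its own network namespace
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move target end into the netns
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                      # bridge the host-side veth ends
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
ping -c 1 10.0.0.3                                           # initiator-to-target sanity check
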
00:32:30.680 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:32:30.680 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:30.680 12:31:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:30.680 [2024-11-25 12:31:26.505977] thread.c:3055:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:30.680 [2024-11-25 12:31:26.506762] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:32:30.680 [2024-11-25 12:31:26.506803] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:30.938 [2024-11-25 12:31:26.645636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:30.938 [2024-11-25 12:31:26.688817] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:30.938 [2024-11-25 12:31:26.688858] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:30.938 [2024-11-25 12:31:26.688865] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:30.938 [2024-11-25 12:31:26.688870] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:30.938 [2024-11-25 12:31:26.688875] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:30.938 [2024-11-25 12:31:26.689656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:30.938 [2024-11-25 12:31:26.689832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:30.938 [2024-11-25 12:31:26.689832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:30.938 [2024-11-25 12:31:26.689781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:30.938 [2024-11-25 12:31:26.762827] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:30.938 [2024-11-25 12:31:26.763707] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:30.938 [2024-11-25 12:31:26.764673] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:30.938 [2024-11-25 12:31:26.764911] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:30.938 [2024-11-25 12:31:26.765227] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
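
Stripped of the harness, the target launch that produced the DPDK and reactor notices above amounts to the following sketch (waitforlisten in autotest_common.sh does the pid and RPC-socket polling more carefully than shown):

# start the target inside the namespace, interrupt-driven, on cores 1-4 (mask 0x1E)
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
nvmfpid=$!
# block until initialization finishes and the app answers RPCs on the default socket
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_wait_init

With --interrupt-mode each reactor blocks in epoll between events instead of busy-polling, which is why every spdk_thread above reports being set to intr mode.
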
00:32:31.871 12:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:31.871 12:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:32:31.871 12:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:31.871 12:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:31.871 12:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:31.871 12:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:31.871 12:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:31.871 12:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.871 12:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:31.871 [2024-11-25 12:31:27.430416] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:31.871 12:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.871 12:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:32:31.871 12:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:31.871 12:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:31.871 12:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:32:31.871 12:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:32:31.871 12:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:32:31.871 12:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.871 12:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:31.871 Malloc0 00:32:31.871 [2024-11-25 12:31:27.510584] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:32:31.871 12:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.871 12:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:32:31.871 12:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:31.871 12:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:31.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
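
host_management.sh@22-30 writes a batch of RPCs into rpcs.txt and cats it into rpc_cmd, so the trace only shows the outcome (Malloc0 created, listener on 10.0.0.3:4420). Given MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 above, the batch is presumably equivalent to the following; the flag spellings are assumptions, not copied from the script:

rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB bdev, 512-byte blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

The test then exercises exactly that last allow-list entry: host_management.sh@84 removes nqn.2016-06.io.spdk:host0 mid-I/O and @85 adds it back, as the trace below shows.
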
00:32:31.871 12:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=100716 00:32:31.871 12:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 100716 /var/tmp/bdevperf.sock 00:32:31.871 12:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 100716 ']' 00:32:31.871 12:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:31.871 12:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:32:31.871 12:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:31.871 12:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:32:31.871 12:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:32:31.871 12:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:32:31.871 12:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:31.871 12:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:31.871 12:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:31.871 { 00:32:31.871 "params": { 00:32:31.871 "name": "Nvme$subsystem", 00:32:31.871 "trtype": "$TEST_TRANSPORT", 00:32:31.871 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:31.871 "adrfam": "ipv4", 00:32:31.871 "trsvcid": "$NVMF_PORT", 00:32:31.871 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:31.871 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:31.871 "hdgst": ${hdgst:-false}, 00:32:31.871 "ddgst": ${ddgst:-false} 00:32:31.871 }, 00:32:31.871 "method": "bdev_nvme_attach_controller" 00:32:31.871 } 00:32:31.871 EOF 00:32:31.871 )") 00:32:31.871 12:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:31.871 12:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:31.871 12:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:32:31.872 12:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
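
bdevperf receives its bdev stack as JSON on an anonymous fd (--json /dev/fd/63), fed by the gen_nvmf_target_json heredoc above; the rendered config is printed just below. As a sketch, the same attach done by hand over RPC, with the values from that config, would be roughly:

rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0

The waitforio loop that follows polls bdev_get_iostat -b Nvme0n1 on the same socket and extracts .bdevs[0].num_read_ops with jq until at least 100 reads have completed (it sees 1219 on the first poll).
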
00:32:31.872 12:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:32:31.872 12:31:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:31.872 "params": { 00:32:31.872 "name": "Nvme0", 00:32:31.872 "trtype": "tcp", 00:32:31.872 "traddr": "10.0.0.3", 00:32:31.872 "adrfam": "ipv4", 00:32:31.872 "trsvcid": "4420", 00:32:31.872 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:31.872 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:31.872 "hdgst": false, 00:32:31.872 "ddgst": false 00:32:31.872 }, 00:32:31.872 "method": "bdev_nvme_attach_controller" 00:32:31.872 }' 00:32:31.872 [2024-11-25 12:31:27.583959] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:32:31.872 [2024-11-25 12:31:27.584019] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100716 ] 00:32:31.872 [2024-11-25 12:31:27.719288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:32.129 [2024-11-25 12:31:27.761975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:32.129 Running I/O for 10 seconds... 00:32:32.696 12:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:32.696 12:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:32:32.696 12:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:32:32.696 12:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.696 12:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:32.696 12:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.696 12:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:32.696 12:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:32:32.696 12:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:32:32.696 12:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:32:32.696 12:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:32:32.696 12:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:32:32.696 12:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:32:32.696 12:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:32:32.696 12:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:32:32.696 12:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.696 12:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:32:32.696 12:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:32.696 12:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.696 12:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1219 00:32:32.696 12:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1219 -ge 100 ']' 00:32:32.696 12:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:32:32.696 12:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:32:32.696 12:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:32:32.696 12:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:32:32.696 12:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.696 12:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:32.697 [2024-11-25 12:31:28.523227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb9ce0 is same with the state(6) to be set 00:32:32.697 [2024-11-25 12:31:28.523267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb9ce0 is same with the state(6) to be set 00:32:32.697 [2024-11-25 12:31:28.523274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb9ce0 is same with the state(6) to be set 00:32:32.697 [2024-11-25 12:31:28.523280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb9ce0 is same with the state(6) to be set 00:32:32.697 [2024-11-25 12:31:28.523285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb9ce0 is same with the state(6) to be set 00:32:32.697 [2024-11-25 12:31:28.523291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb9ce0 is same with the state(6) to be set 00:32:32.697 [2024-11-25 12:31:28.523296] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb9ce0 is same with the state(6) to be set 00:32:32.697 [2024-11-25 12:31:28.523301] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb9ce0 is same with the state(6) to be set 00:32:32.697 [2024-11-25 12:31:28.523306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb9ce0 is same with the state(6) to be set 00:32:32.697 [2024-11-25 12:31:28.523311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb9ce0 is same with the state(6) to be set 00:32:32.697 [2024-11-25 12:31:28.523316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb9ce0 is same with the state(6) to be set 
00:32:32.697 [... the same tcp.c:1773:nvmf_tcp_qpair_set_recv_state recv-state message for tqpair=0x1fb9ce0 repeats roughly forty more times; timestamps 12:31:28.523321 through 12:31:28.523538 elided ...] 00:32:32.697 [2024-11-25 12:31:28.523543]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb9ce0 is same with the state(6) to be set 00:32:32.697 [2024-11-25 12:31:28.523549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb9ce0 is same with the state(6) to be set 00:32:32.697 [2024-11-25 12:31:28.523554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb9ce0 is same with the state(6) to be set 00:32:32.697 [2024-11-25 12:31:28.523559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb9ce0 is same with the state(6) to be set 00:32:32.697 [2024-11-25 12:31:28.523564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb9ce0 is same with the state(6) to be set 00:32:32.697 [2024-11-25 12:31:28.523568] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb9ce0 is same with the state(6) to be set 00:32:32.697 [2024-11-25 12:31:28.523574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb9ce0 is same with the state(6) to be set 00:32:32.697 [2024-11-25 12:31:28.523578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fb9ce0 is same with the state(6) to be set 00:32:32.697 12:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.697 12:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:32:32.697 12:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.697 12:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:32.697 [2024-11-25 12:31:28.534696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:32.697 [2024-11-25 12:31:28.534735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:32.697 [2024-11-25 12:31:28.534743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:32.697 [2024-11-25 12:31:28.534758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:32.697 [2024-11-25 12:31:28.534765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:32.697 [2024-11-25 12:31:28.534770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:32.697 [2024-11-25 12:31:28.534777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:32.697 [2024-11-25 12:31:28.534781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:32.697 [2024-11-25 12:31:28.534787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fd660 is same with the state(6) to be set 00:32:32.697 12:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:32:32.697 12:31:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:32:32.697 [2024-11-25 12:31:28.537703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.697 [2024-11-25 12:31:28.537724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:32.697 [2024-11-25 12:31:28.537737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.697 [2024-11-25 12:31:28.537743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:32.697 [2024-11-25 12:31:28.537759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.698 [2024-11-25 12:31:28.537764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:32.698 [2024-11-25 12:31:28.537770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.698 [2024-11-25 12:31:28.537776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:32.698 [2024-11-25 12:31:28.537782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.698 [2024-11-25 12:31:28.537786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:32.698 [2024-11-25 12:31:28.537792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.698 [2024-11-25 12:31:28.537798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:32.698 [2024-11-25 12:31:28.537804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.698 [2024-11-25 12:31:28.537809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:32.698 [2024-11-25 12:31:28.537815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.698 [2024-11-25 12:31:28.537819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:32.698 [2024-11-25 12:31:28.537825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.698 [2024-11-25 12:31:28.537830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:32.698 [2024-11-25 12:31:28.537835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.698 [2024-11-25 12:31:28.537840] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:32.698 [... the same WRITE nvme_io_qpair_print_command / ABORTED - SQ DELETION completion pair repeats for cid 8 through cid 37 (lba 41984-45696, len:128); timestamps 12:31:28.537846 through 12:31:28.538167 elided ...] 00:32:32.698 [2024-11-25 12:31:28.538172] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:32.698 [2024-11-25 12:31:28.538179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:45824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.698 [2024-11-25 12:31:28.538183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:32.698 [2024-11-25 12:31:28.538190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:45952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.699 [2024-11-25 12:31:28.538195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:32.699 [2024-11-25 12:31:28.538201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:46080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.699 [2024-11-25 12:31:28.538206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:32.699 [2024-11-25 12:31:28.538211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:46208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.699 [2024-11-25 12:31:28.538216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:32.699 [2024-11-25 12:31:28.538222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:46336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.699 [2024-11-25 12:31:28.538226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:32.699 [2024-11-25 12:31:28.538233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:46464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.699 [2024-11-25 12:31:28.538238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:32.699 [2024-11-25 12:31:28.538244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:46592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.699 [2024-11-25 12:31:28.538248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:32.699 [2024-11-25 12:31:28.538254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:46720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.699 [2024-11-25 12:31:28.538260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:32.699 [2024-11-25 12:31:28.538267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:46848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.699 [2024-11-25 12:31:28.538273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:32.699 [2024-11-25 12:31:28.538279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:46976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.699 [2024-11-25 12:31:28.538284] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:32.699 [2024-11-25 12:31:28.538291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:47104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.699 [2024-11-25 12:31:28.538295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:32.699 [2024-11-25 12:31:28.538301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:47232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.699 [2024-11-25 12:31:28.538305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:32.699 [2024-11-25 12:31:28.538311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:47360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.699 [2024-11-25 12:31:28.538316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:32.699 [2024-11-25 12:31:28.538322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:47488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.699 [2024-11-25 12:31:28.538326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:32.699 [2024-11-25 12:31:28.538332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:47616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.699 [2024-11-25 12:31:28.538336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:32.699 [2024-11-25 12:31:28.538342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:47744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.699 [2024-11-25 12:31:28.538346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:32.699 [2024-11-25 12:31:28.538352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:47872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.699 [2024-11-25 12:31:28.538357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:32.699 [2024-11-25 12:31:28.538363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:48000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.699 [2024-11-25 12:31:28.538367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:32.699 [2024-11-25 12:31:28.538373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:48128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.699 [2024-11-25 12:31:28.538377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:32.699 [2024-11-25 12:31:28.538383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:48256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.699 [2024-11-25 12:31:28.538387] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:32.699 [2024-11-25 12:31:28.538393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:48384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.699 [2024-11-25 12:31:28.538397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:32.699 [2024-11-25 12:31:28.538403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:48512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.699 [2024-11-25 12:31:28.538407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:32.699 [2024-11-25 12:31:28.538413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:48640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.699 [2024-11-25 12:31:28.538417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:32.699 [2024-11-25 12:31:28.538423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:48768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:32.699 [2024-11-25 12:31:28.538427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:32.699 task offset: 40704 on job bdev=Nvme0n1 fails 00:32:32.699 00:32:32.699 Latency(us) 00:32:32.699 [2024-11-25T12:31:28.575Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:32.699 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:32:32.699 Job: Nvme0n1 ended in about 0.63 seconds with error 00:32:32.699 Verification LBA range: start 0x0 length 0x400 00:32:32.699 Nvme0n1 : 0.63 2145.69 134.11 102.33 0.00 27926.34 1039.75 26214.40 00:32:32.699 [2024-11-25T12:31:28.575Z] =================================================================================================================== 00:32:32.699 [2024-11-25T12:31:28.575Z] Total : 2145.69 134.11 102.33 0.00 27926.34 1039.75 26214.40 00:32:32.699 [2024-11-25 12:31:28.539370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:32.699 [2024-11-25 12:31:28.540829] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:32:32.699 [2024-11-25 12:31:28.540848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19fd660 (9): Bad file descriptor 00:32:32.699 [2024-11-25 12:31:28.543187] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:32:34.073 12:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 100716
00:32:34.073 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (100716) - No such process
00:32:34.073 12:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true
00:32:34.073 12:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:32:34.073 12:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:32:34.073 12:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:32:34.073 12:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:32:34.073 12:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:32:34.073 12:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:32:34.073 12:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:32:34.073 {
00:32:34.073 "params": {
00:32:34.073 "name": "Nvme$subsystem",
00:32:34.073 "trtype": "$TEST_TRANSPORT",
00:32:34.073 "traddr": "$NVMF_FIRST_TARGET_IP",
00:32:34.073 "adrfam": "ipv4",
00:32:34.073 "trsvcid": "$NVMF_PORT",
00:32:34.073 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:32:34.073 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:32:34.073 "hdgst": ${hdgst:-false},
00:32:34.073 "ddgst": ${ddgst:-false}
00:32:34.073 },
00:32:34.073 "method": "bdev_nvme_attach_controller"
00:32:34.073 }
00:32:34.073 EOF
00:32:34.073 )")
00:32:34.073 12:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:32:34.073 12:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:32:34.074 12:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:32:34.074 12:31:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:32:34.074 "params": {
00:32:34.074 "name": "Nvme0",
00:32:34.074 "trtype": "tcp",
00:32:34.074 "traddr": "10.0.0.3",
00:32:34.074 "adrfam": "ipv4",
00:32:34.074 "trsvcid": "4420",
00:32:34.074 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:32:34.074 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:32:34.074 "hdgst": false,
00:32:34.074 "ddgst": false
00:32:34.074 },
00:32:34.074 "method": "bdev_nvme_attach_controller"
00:32:34.074 }'
00:32:34.074 [2024-11-25 12:31:29.580407] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization...
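For readers reproducing this step outside the harness: the trace above shows gen_nvmf_target_json building a single bdev_nvme_attach_controller stanza from a heredoc, normalizing it with jq, and handing the result to bdevperf as an anonymous file, which is where the --json /dev/fd/62 argument comes from. A minimal standalone sketch of the same pattern follows; the method and params are taken verbatim from the trace, but the outer "subsystems"/"config" wrapper is an assumption based on SPDK's standard JSON config layout and is not shown verbatim in this log.

# hypothetical helper reproducing the gen_nvmf_target_json pattern above
gen_bdevperf_json() {
    # jq . both validates the document and pretty-prints it to stdout
    jq . <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
}
# process substitution exposes the config as /dev/fd/NN, matching --json /dev/fd/62 above
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json <(gen_bdevperf_json) -q 64 -o 65536 -w verify -t 1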
00:32:34.074 [2024-11-25 12:31:29.580463] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100761 ]
00:32:34.074 [2024-11-25 12:31:29.716326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:34.074 [2024-11-25 12:31:29.753328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:32:34.074 Running I/O for 1 seconds...
00:32:35.447 2432.00 IOPS, 152.00 MiB/s
00:32:35.447 Latency(us)
00:32:35.447 [2024-11-25T12:31:31.323Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:35.447 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:32:35.447 Verification LBA range: start 0x0 length 0x400
00:32:35.447 Nvme0n1 : 1.01 2475.22 154.70 0.00 0.00 25446.73 3755.72 23492.14
00:32:35.447 [2024-11-25T12:31:31.323Z] ===================================================================================================================
00:32:35.447 [2024-11-25T12:31:31.323Z] Total : 2475.22 154.70 0.00 0.00 25446.73 3755.72 23492.14
00:32:35.447 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:32:35.447 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:32:35.447 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf
00:32:35.447 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt
00:32:35.447 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:32:35.447 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:32:35.447 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:32:35.447 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:32:35.447 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:32:35.447 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:32:35.447 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:32:35.447 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:32:35.447 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:32:35.447 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:32:35.447 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:32:35.447 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 100644 ']'
00:32:35.447 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 100644
00:32:35.448 12:31:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 100644 ']' 00:32:35.448 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 100644 00:32:35.448 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:32:35.448 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:35.448 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100644 00:32:35.448 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:35.448 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:35.448 killing process with pid 100644 00:32:35.448 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100644' 00:32:35.448 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 100644 00:32:35.448 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 100644 00:32:35.448 [2024-11-25 12:31:31.271303] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:32:35.448 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:35.448 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:35.448 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:35.448 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:32:35.448 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:32:35.448 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:32:35.448 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:35.448 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:35.448 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:32:35.448 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:32:35.448 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:32:35.709 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:32:35.709 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:32:35.709 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:32:35.709 12:31:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:32:35.709 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:32:35.709 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:32:35.709 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:32:35.709 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:32:35.710 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:32:35.710 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:35.710 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:35.710 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:32:35.710 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:35.710 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:35.710 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:35.710 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:32:35.710 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:32:35.710 00:32:35.710 real 0m5.499s 00:32:35.710 user 0m16.950s 00:32:35.710 sys 0m1.670s 00:32:35.710 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:35.710 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:35.710 ************************************ 00:32:35.710 END TEST nvmf_host_management 00:32:35.710 ************************************ 00:32:35.710 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:32:35.710 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:35.710 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:35.710 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:35.710 ************************************ 00:32:35.710 START TEST nvmf_lvol 00:32:35.710 ************************************ 00:32:35.710 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:32:35.972 * Looking for test storage... 
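The END TEST / START TEST banners and the real/user/sys lines above come from the harness's run_test wrapper, invoked here as run_test nvmf_lvol .../nvmf_lvol.sh --transport=tcp --interrupt-mode. Its actual body lives in autotest_common.sh and is not shown in this log; the following is only a hedged illustration of the behavior observable in the output (banner, timed execution, banner), not the wrapper's real source:

# hypothetical reconstruction of the observable run_test behavior
run_test() {
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"            # the timed body produces the real/user/sys lines seen above
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}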
00:32:35.972 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:32:35.972 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:35.972 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:32:35.972 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:35.972 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:35.972 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:35.972 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:35.972 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:35.972 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:32:35.972 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:32:35.972 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:32:35.972 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:32:35.972 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:32:35.972 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:32:35.972 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:32:35.972 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:35.972 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:32:35.972 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:32:35.972 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:35.972 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:35.972 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:32:35.972 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:32:35.972 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:35.972 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:32:35.972 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:32:35.972 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:32:35.972 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:32:35.972 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:35.972 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:32:35.972 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:32:35.972 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:35.972 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:35.972 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:32:35.972 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:35.972 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:35.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:35.972 --rc genhtml_branch_coverage=1 00:32:35.972 --rc genhtml_function_coverage=1 00:32:35.972 --rc genhtml_legend=1 00:32:35.972 --rc geninfo_all_blocks=1 00:32:35.972 --rc geninfo_unexecuted_blocks=1 00:32:35.972 00:32:35.972 ' 00:32:35.972 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:35.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:35.973 --rc genhtml_branch_coverage=1 00:32:35.973 --rc genhtml_function_coverage=1 00:32:35.973 --rc genhtml_legend=1 00:32:35.973 --rc geninfo_all_blocks=1 00:32:35.973 --rc geninfo_unexecuted_blocks=1 00:32:35.973 00:32:35.973 ' 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:35.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:35.973 --rc genhtml_branch_coverage=1 00:32:35.973 --rc genhtml_function_coverage=1 00:32:35.973 --rc genhtml_legend=1 00:32:35.973 --rc geninfo_all_blocks=1 00:32:35.973 --rc geninfo_unexecuted_blocks=1 00:32:35.973 00:32:35.973 ' 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:35.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:35.973 --rc genhtml_branch_coverage=1 00:32:35.973 --rc genhtml_function_coverage=1 00:32:35.973 --rc genhtml_legend=1 00:32:35.973 --rc geninfo_all_blocks=1 00:32:35.973 --rc geninfo_unexecuted_blocks=1 00:32:35.973 00:32:35.973 ' 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:35.973 12:31:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:32:35.973 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:32:35.974 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:32:35.974 Cannot find device "nvmf_init_br" 00:32:35.974 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:32:35.974 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:32:35.974 Cannot find device "nvmf_init_br2" 00:32:35.974 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:32:35.974 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:32:35.974 Cannot find device "nvmf_tgt_br" 00:32:35.974 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:32:35.974 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:32:35.974 Cannot find device "nvmf_tgt_br2" 00:32:35.974 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:32:35.974 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:32:35.974 Cannot find device "nvmf_init_br" 00:32:35.974 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:32:35.974 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:32:35.974 Cannot find device "nvmf_init_br2" 00:32:35.974 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:32:35.974 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:32:35.974 Cannot find 
device "nvmf_tgt_br" 00:32:35.974 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:32:35.974 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:32:35.974 Cannot find device "nvmf_tgt_br2" 00:32:35.974 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:32:35.974 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:32:35.974 Cannot find device "nvmf_br" 00:32:35.974 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:32:35.974 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:32:35.974 Cannot find device "nvmf_init_if" 00:32:35.974 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:32:35.974 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:32:35.974 Cannot find device "nvmf_init_if2" 00:32:36.235 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:32:36.235 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:36.235 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:36.235 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:32:36.235 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:36.235 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:36.235 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:32:36.235 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:32:36.235 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:32:36.235 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:32:36.235 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:32:36.235 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:32:36.235 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:32:36.235 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:32:36.235 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:32:36.235 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:32:36.235 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:32:36.235 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:32:36.235 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:32:36.235 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:32:36.235 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:32:36.235 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:32:36.235 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:32:36.235 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:32:36.235 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:32:36.235 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:32:36.235 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:32:36.235 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:32:36.235 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:32:36.235 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:32:36.235 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:32:36.235 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:32:36.235 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:32:36.235 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:32:36.235 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:32:36.235 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:32:36.235 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:32:36.235 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:32:36.235 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:32:36.235 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:32:36.235 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:32:36.236 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms
00:32:36.236
00:32:36.236 --- 10.0.0.3 ping statistics ---
00:32:36.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:36.236 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms
00:32:36.236 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:32:36.236 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:32:36.236 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms
00:32:36.236
00:32:36.236 --- 10.0.0.4 ping statistics ---
00:32:36.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:36.236 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms
00:32:36.236 12:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:32:36.236 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:32:36.236 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms
00:32:36.236
00:32:36.236 --- 10.0.0.1 ping statistics ---
00:32:36.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:36.236 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms
00:32:36.236 12:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:32:36.236 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:32:36.236 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms
00:32:36.236
00:32:36.236 --- 10.0.0.2 ping statistics ---
00:32:36.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:36.236 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms
00:32:36.236 12:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:32:36.236 12:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@461 -- # return 0
00:32:36.236 12:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:32:36.236 12:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:32:36.236 12:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:32:36.236 12:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:32:36.236 12:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:32:36.236 12:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:32:36.236 12:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:32:36.236 12:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7
00:32:36.236 12:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:32:36.236 12:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable
00:32:36.236 12:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:32:36.236 12:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=101023
00:32:36.236 12:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 101023
00:32:36.236 12:31:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 101023 ']' 00:32:36.236 12:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:36.236 12:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:36.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:36.236 12:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:36.236 12:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:36.236 12:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:36.236 12:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:32:36.236 [2024-11-25 12:31:32.057038] thread.c:3055:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:36.236 [2024-11-25 12:31:32.057730] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:32:36.236 [2024-11-25 12:31:32.057781] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:36.496 [2024-11-25 12:31:32.192627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:36.496 [2024-11-25 12:31:32.234271] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:36.496 [2024-11-25 12:31:32.234309] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:36.496 [2024-11-25 12:31:32.234317] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:36.496 [2024-11-25 12:31:32.234322] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:36.496 [2024-11-25 12:31:32.234326] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:36.496 [2024-11-25 12:31:32.235171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:36.496 [2024-11-25 12:31:32.235311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:36.496 [2024-11-25 12:31:32.235503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:36.496 [2024-11-25 12:31:32.304997] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:36.496 [2024-11-25 12:31:32.306036] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:36.496 [2024-11-25 12:31:32.306651] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:36.496 [2024-11-25 12:31:32.306746] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
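The target startup traced above reduces to one command plus a readiness wait. A minimal standalone sketch follows, assuming the paths shown in this log; the polling loop is a stand-in for the suite's waitforlisten helper, and the final RPC is only a sanity check (its output format varies across SPDK versions).

  # Launch nvmf_tgt in interrupt mode on cores 0-2 inside the test namespace,
  # mirroring the nvmf/common.sh@508 invocation captured above.
  SPDK=/home/vagrant/spdk_repo/spdk
  sudo ip netns exec nvmf_tgt_ns_spdk \
      "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &

  # Stand-in for waitforlisten: poll the default RPC socket until it answers.
  until "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done

  # Sanity check: one reactor should be running per bit set in the -m mask.
  "$SPDK/scripts/rpc.py" framework_get_reactors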
00:32:37.438 12:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:37.438 12:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:32:37.438 12:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:37.438 12:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:37.438 12:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:37.438 12:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:37.438 12:31:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:37.438 [2024-11-25 12:31:33.204235] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:37.438 12:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:37.697 12:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:32:37.697 12:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:37.957 12:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:32:37.957 12:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:32:38.218 12:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:32:38.218 12:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=550d3b2d-414e-4ab0-806c-f95379bc70b4 00:32:38.218 12:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 550d3b2d-414e-4ab0-806c-f95379bc70b4 lvol 20 00:32:38.479 12:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=52336da5-0f9a-443b-9a60-354fbca9537b 00:32:38.479 12:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:38.739 12:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 52336da5-0f9a-443b-9a60-354fbca9537b 00:32:38.997 12:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:32:39.254 [2024-11-25 12:31:34.888145] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:32:39.254 12:31:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:32:39.254 12:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=101160 00:32:39.254 12:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:32:39.254 12:31:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:32:40.626 12:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 52336da5-0f9a-443b-9a60-354fbca9537b MY_SNAPSHOT 00:32:40.626 12:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=980ae464-742d-4812-a7c1-920284405383 00:32:40.626 12:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 52336da5-0f9a-443b-9a60-354fbca9537b 30 00:32:40.892 12:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 980ae464-742d-4812-a7c1-920284405383 MY_CLONE 00:32:41.150 12:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=7ecb1e62-e723-440d-a098-b947d278f370 00:32:41.150 12:31:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 7ecb1e62-e723-440d-a098-b947d278f370 00:32:41.406 12:31:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 101160 00:32:49.501 Initializing NVMe Controllers 00:32:49.501 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:32:49.501 Controller IO queue size 128, less than required. 00:32:49.501 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:49.501 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:32:49.501 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:32:49.501 Initialization complete. Launching workers. 
00:32:49.501 ========================================================
00:32:49.501 Latency(us)
00:32:49.501 Device Information : IOPS MiB/s Average min max
00:32:49.501 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15350.40 59.96 8339.08 3320.07 68031.99
00:32:49.501 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15895.30 62.09 8054.64 243.86 58251.85
00:32:49.501 ========================================================
00:32:49.501 Total : 31245.70 122.05 8194.38 243.86 68031.99
00:32:49.501
00:32:49.501 12:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:32:49.758 12:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 52336da5-0f9a-443b-9a60-354fbca9537b
00:32:50.016 12:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 550d3b2d-414e-4ab0-806c-f95379bc70b4
00:32:50.274 12:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:32:50.274 12:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:32:50.274 12:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:32:50.274 12:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup
00:32:50.274 12:31:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:32:50.274 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:32:50.274 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:32:50.274 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:32:50.274 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:32:50.274 rmmod nvme_tcp
00:32:50.274 rmmod nvme_fabrics
00:32:50.274 rmmod nvme_keyring
00:32:50.274 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:32:50.274 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:32:50.274 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:32:50.274 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 101023 ']'
00:32:50.274 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 101023
00:32:50.274 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 101023 ']'
00:32:50.274 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 101023
00:32:50.274 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname
00:32:50.274 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:50.274 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101023
00:32:50.274 killing
process with pid 101023 00:32:50.274 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:50.274 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:50.274 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101023' 00:32:50.274 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 101023 00:32:50.274 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 101023 00:32:50.531 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:50.531 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:50.531 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:50.531 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:32:50.531 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:32:50.532 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:50.532 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:32:50.532 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:50.532 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:32:50.532 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:32:50.532 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:32:50.532 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:32:50.532 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:32:50.532 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:32:50.532 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:32:50.532 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:32:50.532 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:32:50.532 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:32:50.532 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:32:50.789 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:32:50.789 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:50.789 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:50.789 
12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:32:50.789 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:50.789 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:50.789 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:50.789 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:32:50.789 00:32:50.789 real 0m14.951s 00:32:50.789 user 0m53.721s 00:32:50.789 sys 0m5.281s 00:32:50.789 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:50.789 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:50.789 ************************************ 00:32:50.789 END TEST nvmf_lvol 00:32:50.789 ************************************ 00:32:50.789 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:32:50.789 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:50.789 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:50.789 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:50.789 ************************************ 00:32:50.789 START TEST nvmf_lvs_grow 00:32:50.789 ************************************ 00:32:50.789 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:32:50.789 * Looking for test storage... 
00:32:50.789 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:32:50.789 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:50.789 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:50.789 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:32:51.048 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:51.048 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:51.048 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:51.048 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:51.048 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:32:51.048 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:32:51.048 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:32:51.048 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:32:51.048 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:32:51.048 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:32:51.048 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:32:51.048 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:51.048 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:32:51.048 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:32:51.048 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:51.048 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:51.048 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:32:51.048 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:32:51.048 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:51.048 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:32:51.048 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:32:51.048 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:32:51.048 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:32:51.048 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:51.048 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:32:51.048 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:32:51.048 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:51.048 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:51.048 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:32:51.048 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:51.048 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:51.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:51.048 --rc genhtml_branch_coverage=1 00:32:51.048 --rc genhtml_function_coverage=1 00:32:51.048 --rc genhtml_legend=1 00:32:51.048 --rc geninfo_all_blocks=1 00:32:51.048 --rc geninfo_unexecuted_blocks=1 00:32:51.048 00:32:51.048 ' 00:32:51.048 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:51.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:51.048 --rc genhtml_branch_coverage=1 00:32:51.048 --rc genhtml_function_coverage=1 00:32:51.048 --rc genhtml_legend=1 00:32:51.048 --rc geninfo_all_blocks=1 00:32:51.048 --rc geninfo_unexecuted_blocks=1 00:32:51.048 00:32:51.048 ' 00:32:51.048 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:51.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:51.048 --rc genhtml_branch_coverage=1 00:32:51.048 --rc genhtml_function_coverage=1 00:32:51.048 --rc genhtml_legend=1 00:32:51.048 --rc geninfo_all_blocks=1 00:32:51.048 --rc geninfo_unexecuted_blocks=1 00:32:51.048 00:32:51.048 ' 00:32:51.048 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:51.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:51.048 --rc genhtml_branch_coverage=1 00:32:51.048 --rc genhtml_function_coverage=1 00:32:51.048 --rc genhtml_legend=1 00:32:51.048 --rc geninfo_all_blocks=1 00:32:51.048 --rc geninfo_unexecuted_blocks=1 00:32:51.048 00:32:51.048 ' 00:32:51.049 12:31:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
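The build_nvmf_app_args fragment above (it continues just below, appending --interrupt-mode) uses a standard bash idiom: grow the argv as an array so conditionally added, multi-word options are never re-split. A condensed sketch of that pattern follows; the interrupt_mode guard variable is a stand-in, since the suite's own flag name is not visible in this trace, and the base-binary element is assumed from the invocation logged earlier.

  # Grow the target argv as an array: "${arr[@]}" keeps each option a single
  # word, and an empty array expands to nothing at all.
  NVMF_APP_SHM_ID=0                                           # matches '-i 0' in this log
  NO_HUGE=()                                                  # left empty unless hugepages are disabled
  NVMF_APP=(/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)  # base binary, path per this log
  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)                 # shm id and tracepoint mask
  NVMF_APP+=("${NO_HUGE[@]}")
  interrupt_mode=1                                            # stand-in for the suite's flag
  ((interrupt_mode)) && NVMF_APP+=(--interrupt-mode)
  # Later (nvmf/common.sh@227 in this trace) the netns wrapper is prepended:
  NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
  NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
  "${NVMF_APP[@]}" -m 0x1 &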
00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:51.049 12:31:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:32:51.049 Cannot find device "nvmf_init_br" 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:32:51.049 Cannot find device "nvmf_init_br2" 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:32:51.049 Cannot find device "nvmf_tgt_br" 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:32:51.049 Cannot find device "nvmf_tgt_br2" 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:32:51.049 Cannot find device "nvmf_init_br" 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:32:51.049 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:32:51.049 Cannot find device "nvmf_init_br2" 00:32:51.050 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:32:51.050 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:32:51.050 Cannot find device "nvmf_tgt_br" 00:32:51.050 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@168 -- # true 00:32:51.050 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:32:51.050 Cannot find device "nvmf_tgt_br2" 00:32:51.050 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:32:51.050 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:32:51.050 Cannot find device "nvmf_br" 00:32:51.050 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:32:51.050 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:32:51.050 Cannot find device "nvmf_init_if" 00:32:51.050 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:32:51.050 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:32:51.050 Cannot find device "nvmf_init_if2" 00:32:51.050 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:32:51.050 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:51.050 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:51.050 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:32:51.050 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:51.050 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:51.050 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:32:51.050 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:32:51.050 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:32:51.050 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:32:51.050 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:32:51.050 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:32:51.050 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:32:51.050 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:32:51.050 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:32:51.050 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:32:51.050 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:32:51.050 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:32:51.050 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:32:51.050 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:32:51.050 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:32:51.050 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:32:51.050 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:32:51.050 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:32:51.050 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:32:51.050 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:32:51.050 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:32:51.050 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:32:51.050 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:32:51.050 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:32:51.308 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:32:51.308 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:32:51.308 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:32:51.308 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:32:51.308 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:32:51.308 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:32:51.308 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:32:51.308 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:32:51.308 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:32:51.308 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping 
-c 1 10.0.0.3 00:32:51.308 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:32:51.308 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:32:51.308 00:32:51.308 --- 10.0.0.3 ping statistics --- 00:32:51.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:51.308 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:32:51.308 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:32:51.308 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:32:51.308 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.038 ms 00:32:51.308 00:32:51.308 --- 10.0.0.4 ping statistics --- 00:32:51.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:51.308 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:32:51.308 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:32:51.308 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:51.308 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:32:51.308 00:32:51.308 --- 10.0.0.1 ping statistics --- 00:32:51.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:51.308 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:32:51.308 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:32:51.308 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:51.308 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.032 ms 00:32:51.308 00:32:51.308 --- 10.0.0.2 ping statistics --- 00:32:51.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:51.308 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:32:51.308 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:51.308 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:32:51.308 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:51.308 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:51.308 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:51.308 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:51.308 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:51.308 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:51.308 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:51.308 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:32:51.308 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:51.308 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:51.308 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:51.308 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=101567 00:32:51.308 12:31:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 101567 00:32:51.308 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 101567 ']' 00:32:51.308 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:51.308 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:51.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:51.308 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:32:51.308 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:51.308 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:51.308 12:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:51.308 [2024-11-25 12:31:47.025875] thread.c:3055:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:51.309 [2024-11-25 12:31:47.026564] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:32:51.309 [2024-11-25 12:31:47.026598] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:51.309 [2024-11-25 12:31:47.160988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:51.566 [2024-11-25 12:31:47.197152] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:51.566 [2024-11-25 12:31:47.197188] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:51.566 [2024-11-25 12:31:47.197194] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:51.566 [2024-11-25 12:31:47.197197] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:51.566 [2024-11-25 12:31:47.197201] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:51.566 [2024-11-25 12:31:47.197426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:51.566 [2024-11-25 12:31:47.262703] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:51.566 [2024-11-25 12:31:47.262937] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
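The ping checks earlier in this trace pass because nvmf_veth_init builds a bridged veth topology: initiator-side interfaces stay in the root namespace, target-side interfaces move into nvmf_tgt_ns_spdk, and each bridge-side peer joins nvmf_br. Below is a trimmed sketch of the same setup, reduced to one veth pair per side (the suite uses two per side, *_if and *_if2), with names and addresses copied from the trace; run as root.

  # One initiator pair and one target pair; the *_br ends are the bridge side.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk      # target end lives in the namespace

  ip addr add 10.0.0.1/24 dev nvmf_init_if                                # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if  # target address

  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # Bridge the two root-namespace ends together so 10.0.0.1 can reach 10.0.0.3.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br up

  # Admit NVMe/TCP (port 4420) from the initiator interface, then verify.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3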
00:32:52.131 12:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:52.131 12:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:32:52.131 12:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:52.131 12:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:52.131 12:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:52.131 12:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:52.131 12:31:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:52.387 [2024-11-25 12:31:48.118124] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:52.387 12:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:32:52.387 12:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:52.387 12:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:52.387 12:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:52.387 ************************************ 00:32:52.387 START TEST lvs_grow_clean 00:32:52.387 ************************************ 00:32:52.388 12:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:32:52.388 12:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:52.388 12:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:52.388 12:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:52.388 12:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:52.388 12:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:52.388 12:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:52.388 12:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:32:52.388 12:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:32:52.388 12:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:52.647 12:31:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:52.647 12:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:52.909 12:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=f6635412-8b63-431a-857b-c1840e017470 00:32:52.909 12:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f6635412-8b63-431a-857b-c1840e017470 00:32:52.909 12:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:53.169 12:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:53.169 12:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:53.169 12:31:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u f6635412-8b63-431a-857b-c1840e017470 lvol 150 00:32:53.169 12:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=1b4e91c5-2a67-48c0-ab4a-2d26c67289d3 00:32:53.169 12:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:32:53.169 12:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:53.430 [2024-11-25 12:31:49.221916] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:32:53.430 [2024-11-25 12:31:49.222066] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:53.430 true 00:32:53.430 12:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f6635412-8b63-431a-857b-c1840e017470 00:32:53.430 12:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:53.691 12:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:53.692 12:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:53.953 12:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1b4e91c5-2a67-48c0-ab4a-2d26c67289d3 00:32:54.214 12:31:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:32:54.214 [2024-11-25 12:31:50.042302] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:32:54.214 12:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:32:54.476 12:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=101720 00:32:54.476 12:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:54.476 12:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 101720 /var/tmp/bdevperf.sock 00:32:54.476 12:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 101720 ']' 00:32:54.476 12:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:54.476 12:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:54.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:54.476 12:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:54.476 12:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:54.476 12:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:54.476 12:31:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:54.476 [2024-11-25 12:31:50.313914] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:32:54.476 [2024-11-25 12:31:50.313976] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101720 ] 00:32:54.738 [2024-11-25 12:31:50.455091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:54.738 [2024-11-25 12:31:50.497135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:55.684 12:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:55.684 12:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:32:55.684 12:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:55.684 Nvme0n1 00:32:55.684 12:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:55.944 [ 00:32:55.945 { 00:32:55.945 "aliases": [ 00:32:55.945 "1b4e91c5-2a67-48c0-ab4a-2d26c67289d3" 00:32:55.945 ], 00:32:55.945 "assigned_rate_limits": { 00:32:55.945 "r_mbytes_per_sec": 0, 00:32:55.945 "rw_ios_per_sec": 0, 00:32:55.945 "rw_mbytes_per_sec": 0, 00:32:55.945 "w_mbytes_per_sec": 0 00:32:55.945 }, 00:32:55.945 "block_size": 4096, 00:32:55.945 "claimed": false, 00:32:55.945 "driver_specific": { 00:32:55.945 "mp_policy": "active_passive", 00:32:55.945 "nvme": [ 00:32:55.945 { 00:32:55.945 "ctrlr_data": { 00:32:55.945 "ana_reporting": false, 00:32:55.945 "cntlid": 1, 00:32:55.945 "firmware_revision": "25.01", 00:32:55.945 "model_number": "SPDK bdev Controller", 00:32:55.945 "multi_ctrlr": true, 00:32:55.945 "oacs": { 00:32:55.945 "firmware": 0, 00:32:55.945 "format": 0, 00:32:55.945 "ns_manage": 0, 00:32:55.945 "security": 0 00:32:55.945 }, 00:32:55.945 "serial_number": "SPDK0", 00:32:55.945 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:55.945 "vendor_id": "0x8086" 00:32:55.945 }, 00:32:55.945 "ns_data": { 00:32:55.945 "can_share": true, 00:32:55.945 "id": 1 00:32:55.945 }, 00:32:55.945 "trid": { 00:32:55.945 "adrfam": "IPv4", 00:32:55.945 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:55.945 "traddr": "10.0.0.3", 00:32:55.945 "trsvcid": "4420", 00:32:55.945 "trtype": "TCP" 00:32:55.945 }, 00:32:55.945 "vs": { 00:32:55.945 "nvme_version": "1.3" 00:32:55.945 } 00:32:55.945 } 00:32:55.945 ] 00:32:55.945 }, 00:32:55.945 "memory_domains": [ 00:32:55.945 { 00:32:55.945 "dma_device_id": "system", 00:32:55.945 "dma_device_type": 1 00:32:55.945 } 00:32:55.945 ], 00:32:55.945 "name": "Nvme0n1", 00:32:55.945 "num_blocks": 38912, 00:32:55.945 "numa_id": -1, 00:32:55.945 "product_name": "NVMe disk", 00:32:55.945 "supported_io_types": { 00:32:55.945 "abort": true, 00:32:55.945 "compare": true, 00:32:55.945 "compare_and_write": true, 00:32:55.945 "copy": true, 00:32:55.945 "flush": true, 00:32:55.945 "get_zone_info": false, 00:32:55.945 "nvme_admin": true, 00:32:55.945 "nvme_io": true, 00:32:55.945 "nvme_io_md": false, 00:32:55.945 "nvme_iov_md": false, 00:32:55.945 "read": true, 00:32:55.945 "reset": true, 00:32:55.945 "seek_data": false, 00:32:55.945 
"seek_hole": false, 00:32:55.945 "unmap": true, 00:32:55.945 "write": true, 00:32:55.945 "write_zeroes": true, 00:32:55.945 "zcopy": false, 00:32:55.945 "zone_append": false, 00:32:55.945 "zone_management": false 00:32:55.945 }, 00:32:55.945 "uuid": "1b4e91c5-2a67-48c0-ab4a-2d26c67289d3", 00:32:55.945 "zoned": false 00:32:55.945 } 00:32:55.945 ] 00:32:55.945 12:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=101769 00:32:55.945 12:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:55.945 12:31:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:55.945 Running I/O for 10 seconds... 00:32:56.883 Latency(us) 00:32:56.883 [2024-11-25T12:31:52.759Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:56.883 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:56.883 Nvme0n1 : 1.00 10252.00 40.05 0.00 0.00 0.00 0.00 0.00 00:32:56.883 [2024-11-25T12:31:52.759Z] =================================================================================================================== 00:32:56.883 [2024-11-25T12:31:52.759Z] Total : 10252.00 40.05 0.00 0.00 0.00 0.00 0.00 00:32:56.883 00:32:57.912 12:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f6635412-8b63-431a-857b-c1840e017470 00:32:57.912 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:57.912 Nvme0n1 : 2.00 10677.00 41.71 0.00 0.00 0.00 0.00 0.00 00:32:57.912 [2024-11-25T12:31:53.788Z] =================================================================================================================== 00:32:57.912 [2024-11-25T12:31:53.788Z] Total : 10677.00 41.71 0.00 0.00 0.00 0.00 0.00 00:32:57.912 00:32:58.174 true 00:32:58.174 12:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f6635412-8b63-431a-857b-c1840e017470 00:32:58.174 12:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:58.434 12:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:58.434 12:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:32:58.434 12:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 101769 00:32:59.007 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:59.007 Nvme0n1 : 3.00 10900.33 42.58 0.00 0.00 0.00 0.00 0.00 00:32:59.007 [2024-11-25T12:31:54.883Z] =================================================================================================================== 00:32:59.007 [2024-11-25T12:31:54.883Z] Total : 10900.33 42.58 0.00 0.00 0.00 0.00 0.00 00:32:59.007 00:32:59.951 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:59.951 Nvme0n1 : 4.00 11498.75 44.92 0.00 0.00 0.00 0.00 0.00 00:32:59.951 
[2024-11-25T12:31:55.827Z] =================================================================================================================== 00:32:59.951 [2024-11-25T12:31:55.827Z] Total : 11498.75 44.92 0.00 0.00 0.00 0.00 0.00 00:32:59.951 00:33:00.890 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:00.890 Nvme0n1 : 5.00 11891.20 46.45 0.00 0.00 0.00 0.00 0.00 00:33:00.890 [2024-11-25T12:31:56.766Z] =================================================================================================================== 00:33:00.890 [2024-11-25T12:31:56.766Z] Total : 11891.20 46.45 0.00 0.00 0.00 0.00 0.00 00:33:00.890 00:33:02.265 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:02.265 Nvme0n1 : 6.00 12028.00 46.98 0.00 0.00 0.00 0.00 0.00 00:33:02.265 [2024-11-25T12:31:58.141Z] =================================================================================================================== 00:33:02.265 [2024-11-25T12:31:58.141Z] Total : 12028.00 46.98 0.00 0.00 0.00 0.00 0.00 00:33:02.265 00:33:03.200 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:03.200 Nvme0n1 : 7.00 12159.86 47.50 0.00 0.00 0.00 0.00 0.00 00:33:03.200 [2024-11-25T12:31:59.076Z] =================================================================================================================== 00:33:03.200 [2024-11-25T12:31:59.076Z] Total : 12159.86 47.50 0.00 0.00 0.00 0.00 0.00 00:33:03.200 00:33:04.135 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:04.135 Nvme0n1 : 8.00 12230.12 47.77 0.00 0.00 0.00 0.00 0.00 00:33:04.135 [2024-11-25T12:32:00.011Z] =================================================================================================================== 00:33:04.135 [2024-11-25T12:32:00.011Z] Total : 12230.12 47.77 0.00 0.00 0.00 0.00 0.00 00:33:04.135 00:33:05.070 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:05.070 Nvme0n1 : 9.00 12264.89 47.91 0.00 0.00 0.00 0.00 0.00 00:33:05.070 [2024-11-25T12:32:00.946Z] =================================================================================================================== 00:33:05.070 [2024-11-25T12:32:00.946Z] Total : 12264.89 47.91 0.00 0.00 0.00 0.00 0.00 00:33:05.070 00:33:06.005 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:06.005 Nvme0n1 : 10.00 12305.70 48.07 0.00 0.00 0.00 0.00 0.00 00:33:06.005 [2024-11-25T12:32:01.881Z] =================================================================================================================== 00:33:06.005 [2024-11-25T12:32:01.881Z] Total : 12305.70 48.07 0.00 0.00 0.00 0.00 0.00 00:33:06.005 00:33:06.005 00:33:06.005 Latency(us) 00:33:06.005 [2024-11-25T12:32:01.881Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:06.005 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:06.005 Nvme0n1 : 10.00 12314.63 48.10 0.00 0.00 10390.70 4285.05 31860.58 00:33:06.005 [2024-11-25T12:32:01.881Z] =================================================================================================================== 00:33:06.005 [2024-11-25T12:32:01.881Z] Total : 12314.63 48.10 0.00 0.00 10390.70 4285.05 31860.58 00:33:06.005 { 00:33:06.005 "results": [ 00:33:06.005 { 00:33:06.005 "job": "Nvme0n1", 00:33:06.005 "core_mask": "0x2", 00:33:06.005 "workload": "randwrite", 00:33:06.005 "status": "finished", 00:33:06.005 "queue_depth": 128, 00:33:06.005 
"io_size": 4096, 00:33:06.005 "runtime": 10.003144, 00:33:06.005 "iops": 12314.628280868495, 00:33:06.005 "mibps": 48.10401672214256, 00:33:06.005 "io_failed": 0, 00:33:06.005 "io_timeout": 0, 00:33:06.005 "avg_latency_us": 10390.704347544812, 00:33:06.005 "min_latency_us": 4285.046153846154, 00:33:06.005 "max_latency_us": 31860.578461538462 00:33:06.005 } 00:33:06.005 ], 00:33:06.005 "core_count": 1 00:33:06.005 } 00:33:06.005 12:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 101720 00:33:06.005 12:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 101720 ']' 00:33:06.005 12:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 101720 00:33:06.005 12:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:33:06.005 12:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:06.005 12:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101720 00:33:06.005 killing process with pid 101720 00:33:06.005 Received shutdown signal, test time was about 10.000000 seconds 00:33:06.005 00:33:06.005 Latency(us) 00:33:06.005 [2024-11-25T12:32:01.881Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:06.005 [2024-11-25T12:32:01.881Z] =================================================================================================================== 00:33:06.005 [2024-11-25T12:32:01.881Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:06.005 12:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:06.005 12:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:06.005 12:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101720' 00:33:06.005 12:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 101720 00:33:06.005 12:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 101720 00:33:06.264 12:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:33:06.264 12:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:06.522 12:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f6635412-8b63-431a-857b-c1840e017470 00:33:06.522 12:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:33:06.781 12:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # 
free_clusters=61 00:33:06.781 12:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:33:06.781 12:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:07.042 [2024-11-25 12:32:02.717900] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:33:07.042 12:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f6635412-8b63-431a-857b-c1840e017470 00:33:07.042 12:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:33:07.042 12:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f6635412-8b63-431a-857b-c1840e017470 00:33:07.042 12:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:07.042 12:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:07.042 12:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:07.042 12:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:07.042 12:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:07.042 12:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:07.042 12:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:07.042 12:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:33:07.042 12:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f6635412-8b63-431a-857b-c1840e017470 00:33:07.302 2024/11/25 12:32:02 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:f6635412-8b63-431a-857b-c1840e017470], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:33:07.302 request: 00:33:07.302 { 00:33:07.302 "method": "bdev_lvol_get_lvstores", 00:33:07.302 "params": { 00:33:07.302 "uuid": "f6635412-8b63-431a-857b-c1840e017470" 00:33:07.302 } 00:33:07.302 } 00:33:07.302 Got JSON-RPC error response 00:33:07.302 GoRPCClient: error on JSON-RPC call 00:33:07.302 12:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:33:07.302 12:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 
128 )) 00:33:07.302 12:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:07.302 12:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:07.302 12:32:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:07.302 aio_bdev 00:33:07.561 12:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1b4e91c5-2a67-48c0-ab4a-2d26c67289d3 00:33:07.561 12:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=1b4e91c5-2a67-48c0-ab4a-2d26c67289d3 00:33:07.561 12:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:07.561 12:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:33:07.561 12:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:07.561 12:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:07.561 12:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:07.561 12:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1b4e91c5-2a67-48c0-ab4a-2d26c67289d3 -t 2000 00:33:07.820 [ 00:33:07.820 { 00:33:07.820 "aliases": [ 00:33:07.820 "lvs/lvol" 00:33:07.820 ], 00:33:07.820 "assigned_rate_limits": { 00:33:07.820 "r_mbytes_per_sec": 0, 00:33:07.820 "rw_ios_per_sec": 0, 00:33:07.820 "rw_mbytes_per_sec": 0, 00:33:07.820 "w_mbytes_per_sec": 0 00:33:07.820 }, 00:33:07.820 "block_size": 4096, 00:33:07.820 "claimed": false, 00:33:07.820 "driver_specific": { 00:33:07.820 "lvol": { 00:33:07.820 "base_bdev": "aio_bdev", 00:33:07.820 "clone": false, 00:33:07.820 "esnap_clone": false, 00:33:07.820 "lvol_store_uuid": "f6635412-8b63-431a-857b-c1840e017470", 00:33:07.820 "num_allocated_clusters": 38, 00:33:07.820 "snapshot": false, 00:33:07.820 "thin_provision": false 00:33:07.820 } 00:33:07.820 }, 00:33:07.820 "name": "1b4e91c5-2a67-48c0-ab4a-2d26c67289d3", 00:33:07.820 "num_blocks": 38912, 00:33:07.820 "product_name": "Logical Volume", 00:33:07.820 "supported_io_types": { 00:33:07.820 "abort": false, 00:33:07.820 "compare": false, 00:33:07.820 "compare_and_write": false, 00:33:07.820 "copy": false, 00:33:07.820 "flush": false, 00:33:07.820 "get_zone_info": false, 00:33:07.820 "nvme_admin": false, 00:33:07.820 "nvme_io": false, 00:33:07.820 "nvme_io_md": false, 00:33:07.820 "nvme_iov_md": false, 00:33:07.820 "read": true, 00:33:07.820 "reset": true, 00:33:07.820 "seek_data": true, 00:33:07.820 "seek_hole": true, 00:33:07.820 "unmap": true, 00:33:07.820 "write": true, 00:33:07.820 "write_zeroes": true, 00:33:07.820 "zcopy": false, 00:33:07.820 "zone_append": false, 00:33:07.820 "zone_management": false 00:33:07.820 }, 00:33:07.820 "uuid": 
"1b4e91c5-2a67-48c0-ab4a-2d26c67289d3", 00:33:07.820 "zoned": false 00:33:07.820 } 00:33:07.820 ] 00:33:07.820 12:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:33:07.820 12:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:33:07.820 12:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f6635412-8b63-431a-857b-c1840e017470 00:33:08.079 12:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:33:08.079 12:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f6635412-8b63-431a-857b-c1840e017470 00:33:08.079 12:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:33:08.337 12:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:33:08.338 12:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 1b4e91c5-2a67-48c0-ab4a-2d26c67289d3 00:33:08.597 12:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f6635412-8b63-431a-857b-c1840e017470 00:33:08.855 12:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:08.855 12:32:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:33:09.425 00:33:09.425 real 0m16.932s 00:33:09.425 user 0m16.278s 00:33:09.425 sys 0m1.674s 00:33:09.425 12:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:09.425 ************************************ 00:33:09.425 END TEST lvs_grow_clean 00:33:09.425 12:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:33:09.425 ************************************ 00:33:09.425 12:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:33:09.425 12:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:09.425 12:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:09.425 12:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:09.425 ************************************ 00:33:09.425 START TEST lvs_grow_dirty 00:33:09.425 ************************************ 00:33:09.425 12:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:33:09.425 12:32:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:33:09.425 12:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:33:09.425 12:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:33:09.425 12:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:33:09.425 12:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:33:09.425 12:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:33:09.425 12:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:33:09.425 12:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:33:09.425 12:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:09.684 12:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:33:09.684 12:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:33:09.943 12:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=0ff7ddb6-f954-41fa-be7e-ec292263b2dc 00:33:09.943 12:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0ff7ddb6-f954-41fa-be7e-ec292263b2dc 00:33:09.943 12:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:33:10.203 12:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:33:10.203 12:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:33:10.203 12:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 0ff7ddb6-f954-41fa-be7e-ec292263b2dc lvol 150 00:33:10.203 12:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=f5a5069d-36b8-4bca-aeb2-d0c0b34e1f6a 00:33:10.203 12:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:33:10.203 12:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:33:10.461 [2024-11-25 12:32:06.217826] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:33:10.461 [2024-11-25 12:32:06.217952] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:33:10.461 true 00:33:10.461 12:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0ff7ddb6-f954-41fa-be7e-ec292263b2dc 00:33:10.461 12:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:33:10.720 12:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:33:10.720 12:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:33:10.997 12:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f5a5069d-36b8-4bca-aeb2-d0c0b34e1f6a 00:33:11.295 12:32:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:33:11.295 [2024-11-25 12:32:07.050195] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:33:11.295 12:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:33:11.569 12:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=102143 00:33:11.569 12:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:11.569 12:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 102143 /var/tmp/bdevperf.sock 00:33:11.569 12:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 102143 ']' 00:33:11.569 12:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:11.569 12:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:11.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:11.569 12:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
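A minimal sketch of the bdevperf flow that the waitforlisten above is gating on, assembled from the commands xtraced in this log (same /home/vagrant/spdk_repo checkout assumed; bdevperf is backgrounded here only for illustration):
  # start bdevperf on its own RPC socket; -z makes it idle until a perform_tests RPC arrives
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  # attach the exported lvol namespace over NVMe/TCP as bdev Nvme0n1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  # kick off the queued 10-second randwrite run
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests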
00:33:11.569 12:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:11.569 12:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:11.569 12:32:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:33:11.569 [2024-11-25 12:32:07.312714] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:33:11.569 [2024-11-25 12:32:07.312792] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102143 ] 00:33:11.826 [2024-11-25 12:32:07.447141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:11.826 [2024-11-25 12:32:07.488124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:12.391 12:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:12.391 12:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:33:12.391 12:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:33:12.650 Nvme0n1 00:33:12.650 12:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:33:12.650 [ 00:33:12.650 { 00:33:12.650 "aliases": [ 00:33:12.650 "f5a5069d-36b8-4bca-aeb2-d0c0b34e1f6a" 00:33:12.650 ], 00:33:12.650 "assigned_rate_limits": { 00:33:12.650 "r_mbytes_per_sec": 0, 00:33:12.650 "rw_ios_per_sec": 0, 00:33:12.650 "rw_mbytes_per_sec": 0, 00:33:12.650 "w_mbytes_per_sec": 0 00:33:12.650 }, 00:33:12.650 "block_size": 4096, 00:33:12.650 "claimed": false, 00:33:12.650 "driver_specific": { 00:33:12.650 "mp_policy": "active_passive", 00:33:12.650 "nvme": [ 00:33:12.650 { 00:33:12.650 "ctrlr_data": { 00:33:12.650 "ana_reporting": false, 00:33:12.650 "cntlid": 1, 00:33:12.650 "firmware_revision": "25.01", 00:33:12.650 "model_number": "SPDK bdev Controller", 00:33:12.650 "multi_ctrlr": true, 00:33:12.650 "oacs": { 00:33:12.650 "firmware": 0, 00:33:12.650 "format": 0, 00:33:12.650 "ns_manage": 0, 00:33:12.650 "security": 0 00:33:12.650 }, 00:33:12.650 "serial_number": "SPDK0", 00:33:12.650 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:12.650 "vendor_id": "0x8086" 00:33:12.650 }, 00:33:12.650 "ns_data": { 00:33:12.650 "can_share": true, 00:33:12.650 "id": 1 00:33:12.650 }, 00:33:12.650 "trid": { 00:33:12.650 "adrfam": "IPv4", 00:33:12.650 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:12.650 "traddr": "10.0.0.3", 00:33:12.650 "trsvcid": "4420", 00:33:12.650 "trtype": "TCP" 00:33:12.650 }, 00:33:12.650 "vs": { 00:33:12.650 "nvme_version": "1.3" 00:33:12.650 } 00:33:12.650 } 00:33:12.650 ] 00:33:12.650 }, 00:33:12.650 "memory_domains": [ 00:33:12.650 { 00:33:12.650 "dma_device_id": "system", 
00:33:12.650 "dma_device_type": 1 00:33:12.650 } 00:33:12.650 ], 00:33:12.650 "name": "Nvme0n1", 00:33:12.650 "num_blocks": 38912, 00:33:12.650 "numa_id": -1, 00:33:12.650 "product_name": "NVMe disk", 00:33:12.650 "supported_io_types": { 00:33:12.650 "abort": true, 00:33:12.650 "compare": true, 00:33:12.650 "compare_and_write": true, 00:33:12.650 "copy": true, 00:33:12.650 "flush": true, 00:33:12.650 "get_zone_info": false, 00:33:12.650 "nvme_admin": true, 00:33:12.650 "nvme_io": true, 00:33:12.650 "nvme_io_md": false, 00:33:12.650 "nvme_iov_md": false, 00:33:12.650 "read": true, 00:33:12.650 "reset": true, 00:33:12.650 "seek_data": false, 00:33:12.650 "seek_hole": false, 00:33:12.650 "unmap": true, 00:33:12.650 "write": true, 00:33:12.650 "write_zeroes": true, 00:33:12.650 "zcopy": false, 00:33:12.650 "zone_append": false, 00:33:12.650 "zone_management": false 00:33:12.650 }, 00:33:12.650 "uuid": "f5a5069d-36b8-4bca-aeb2-d0c0b34e1f6a", 00:33:12.650 "zoned": false 00:33:12.650 } 00:33:12.650 ] 00:33:12.650 12:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:12.650 12:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=102185 00:33:12.650 12:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:33:12.908 Running I/O for 10 seconds... 00:33:13.841 Latency(us) 00:33:13.841 [2024-11-25T12:32:09.717Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:13.841 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:13.841 Nvme0n1 : 1.00 10886.00 42.52 0.00 0.00 0.00 0.00 0.00 00:33:13.841 [2024-11-25T12:32:09.717Z] =================================================================================================================== 00:33:13.841 [2024-11-25T12:32:09.717Z] Total : 10886.00 42.52 0.00 0.00 0.00 0.00 0.00 00:33:13.841 00:33:14.774 12:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0ff7ddb6-f954-41fa-be7e-ec292263b2dc 00:33:14.774 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:14.774 Nvme0n1 : 2.00 10858.50 42.42 0.00 0.00 0.00 0.00 0.00 00:33:14.774 [2024-11-25T12:32:10.650Z] =================================================================================================================== 00:33:14.774 [2024-11-25T12:32:10.650Z] Total : 10858.50 42.42 0.00 0.00 0.00 0.00 0.00 00:33:14.774 00:33:15.032 true 00:33:15.032 12:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0ff7ddb6-f954-41fa-be7e-ec292263b2dc 00:33:15.032 12:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:33:15.290 12:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:33:15.290 12:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:33:15.290 12:32:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 102185 00:33:15.856 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:15.856 Nvme0n1 : 3.00 10520.33 41.10 0.00 0.00 0.00 0.00 0.00 00:33:15.856 [2024-11-25T12:32:11.732Z] =================================================================================================================== 00:33:15.856 [2024-11-25T12:32:11.732Z] Total : 10520.33 41.10 0.00 0.00 0.00 0.00 0.00 00:33:15.856 00:33:16.791 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:16.791 Nvme0n1 : 4.00 10476.25 40.92 0.00 0.00 0.00 0.00 0.00 00:33:16.791 [2024-11-25T12:32:12.667Z] =================================================================================================================== 00:33:16.791 [2024-11-25T12:32:12.667Z] Total : 10476.25 40.92 0.00 0.00 0.00 0.00 0.00 00:33:16.791 00:33:17.723 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:17.723 Nvme0n1 : 5.00 10859.60 42.42 0.00 0.00 0.00 0.00 0.00 00:33:17.723 [2024-11-25T12:32:13.599Z] =================================================================================================================== 00:33:17.723 [2024-11-25T12:32:13.599Z] Total : 10859.60 42.42 0.00 0.00 0.00 0.00 0.00 00:33:17.723 00:33:19.093 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:19.093 Nvme0n1 : 6.00 11244.50 43.92 0.00 0.00 0.00 0.00 0.00 00:33:19.093 [2024-11-25T12:32:14.969Z] =================================================================================================================== 00:33:19.093 [2024-11-25T12:32:14.969Z] Total : 11244.50 43.92 0.00 0.00 0.00 0.00 0.00 00:33:19.093 00:33:20.026 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:20.026 Nvme0n1 : 7.00 11517.43 44.99 0.00 0.00 0.00 0.00 0.00 00:33:20.026 [2024-11-25T12:32:15.902Z] =================================================================================================================== 00:33:20.026 [2024-11-25T12:32:15.902Z] Total : 11517.43 44.99 0.00 0.00 0.00 0.00 0.00 00:33:20.026 00:33:20.959 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:20.959 Nvme0n1 : 8.00 11724.50 45.80 0.00 0.00 0.00 0.00 0.00 00:33:20.959 [2024-11-25T12:32:16.835Z] =================================================================================================================== 00:33:20.959 [2024-11-25T12:32:16.835Z] Total : 11724.50 45.80 0.00 0.00 0.00 0.00 0.00 00:33:20.959 00:33:21.892 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:21.892 Nvme0n1 : 9.00 11258.78 43.98 0.00 0.00 0.00 0.00 0.00 00:33:21.892 [2024-11-25T12:32:17.768Z] =================================================================================================================== 00:33:21.892 [2024-11-25T12:32:17.768Z] Total : 11258.78 43.98 0.00 0.00 0.00 0.00 0.00 00:33:21.892 00:33:22.824 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:22.824 Nvme0n1 : 10.00 10623.10 41.50 0.00 0.00 0.00 0.00 0.00 00:33:22.824 [2024-11-25T12:32:18.700Z] =================================================================================================================== 00:33:22.824 [2024-11-25T12:32:18.700Z] Total : 10623.10 41.50 0.00 0.00 0.00 0.00 0.00 00:33:22.824 00:33:22.824 00:33:22.824 Latency(us) 00:33:22.824 [2024-11-25T12:32:18.700Z] Device Information : runtime(s) IOPS 
MiB/s Fail/s TO/s Average min max 00:33:22.824 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:22.824 Nvme0n1 : 10.01 10630.53 41.53 0.00 0.00 12038.02 4688.34 864671.90 00:33:22.824 [2024-11-25T12:32:18.700Z] =================================================================================================================== 00:33:22.824 [2024-11-25T12:32:18.700Z] Total : 10630.53 41.53 0.00 0.00 12038.02 4688.34 864671.90 00:33:22.824 { 00:33:22.824 "results": [ 00:33:22.824 { 00:33:22.824 "job": "Nvme0n1", 00:33:22.824 "core_mask": "0x2", 00:33:22.824 "workload": "randwrite", 00:33:22.824 "status": "finished", 00:33:22.824 "queue_depth": 128, 00:33:22.824 "io_size": 4096, 00:33:22.824 "runtime": 10.005055, 00:33:22.824 "iops": 10630.526268971034, 00:33:22.824 "mibps": 41.5254932381681, 00:33:22.824 "io_failed": 0, 00:33:22.824 "io_timeout": 0, 00:33:22.824 "avg_latency_us": 12038.01521598476, 00:33:22.824 "min_latency_us": 4688.344615384615, 00:33:22.824 "max_latency_us": 864671.9015384616 00:33:22.824 } 00:33:22.824 ], 00:33:22.824 "core_count": 1 00:33:22.824 } 00:33:22.824 12:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 102143 00:33:22.824 12:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 102143 ']' 00:33:22.824 12:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 102143 00:33:22.824 12:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:33:22.824 12:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:22.824 12:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102143 00:33:22.824 12:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:22.824 killing process with pid 102143 00:33:22.824 12:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:22.824 12:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102143' 00:33:22.824 Received shutdown signal, test time was about 10.000000 seconds 00:33:22.824 00:33:22.824 Latency(us) 00:33:22.824 [2024-11-25T12:32:18.700Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:22.824 [2024-11-25T12:32:18.700Z] =================================================================================================================== 00:33:22.824 [2024-11-25T12:32:18.700Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:22.824 12:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 102143 00:33:22.824 12:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 102143 00:33:23.082 12:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:33:23.082 12:32:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:23.340 12:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:33:23.340 12:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0ff7ddb6-f954-41fa-be7e-ec292263b2dc 00:33:23.597 12:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:33:23.597 12:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:33:23.597 12:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 101567 00:33:23.597 12:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 101567 00:33:23.597 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 101567 Killed "${NVMF_APP[@]}" "$@" 00:33:23.597 12:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:33:23.597 12:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:33:23.597 12:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:23.597 12:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:23.597 12:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:23.597 12:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=102348 00:33:23.597 12:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 102348 00:33:23.597 12:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 102348 ']' 00:33:23.597 12:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:23.597 12:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:33:23.597 12:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:23.597 12:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:23.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
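What makes this the dirty variant: the grow RPC was issued against the live target, which is then SIGKILLed so the lvstore is never closed cleanly; re-registering the AIO file under the restarted target forces blobstore recovery. A rough sketch of that sequence, using the values from this run (rpc.py stands for the full scripts/rpc.py path traced above):
  # grow the lvstore across the now-400M backing file
  rpc.py bdev_lvol_grow_lvstore -u 0ff7ddb6-f954-41fa-be7e-ec292263b2dc
  # hard-kill the target so the lvstore is left dirty on disk
  kill -9 101567
  # after restarting nvmf_tgt, re-register the backing file; blobstore replay
  # then prints "Performing recovery on blobstore", as seen below
  rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096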
00:33:23.597 12:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:23.597 12:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:23.597 [2024-11-25 12:32:19.369032] thread.c:3055:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:23.597 [2024-11-25 12:32:19.369806] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:33:23.597 [2024-11-25 12:32:19.369856] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:23.855 [2024-11-25 12:32:19.511351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:23.855 [2024-11-25 12:32:19.546379] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:23.855 [2024-11-25 12:32:19.546420] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:23.855 [2024-11-25 12:32:19.546426] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:23.855 [2024-11-25 12:32:19.546431] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:23.855 [2024-11-25 12:32:19.546436] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:23.855 [2024-11-25 12:32:19.546673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:23.855 [2024-11-25 12:32:19.599240] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:23.855 [2024-11-25 12:32:19.599492] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
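For reference, the nvmfappstart traced above relaunches the target inside the test netns with interrupt mode on a single core:
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1
The two spdk_thread_set_interrupt_mode notices just above confirm that both the app thread and the nvmf poll group thread came up in interrupt mode, which is the mode this nvmf_target_core_interrupt_mode suite exercises.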
00:33:24.421 12:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:24.421 12:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:33:24.421 12:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:24.421 12:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:24.421 12:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:24.421 12:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:24.421 12:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:24.678 [2024-11-25 12:32:20.463919] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:33:24.678 [2024-11-25 12:32:20.464541] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:33:24.678 [2024-11-25 12:32:20.464821] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:33:24.678 12:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:33:24.678 12:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev f5a5069d-36b8-4bca-aeb2-d0c0b34e1f6a 00:33:24.678 12:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=f5a5069d-36b8-4bca-aeb2-d0c0b34e1f6a 00:33:24.678 12:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:24.678 12:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:33:24.678 12:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:24.678 12:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:24.678 12:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:24.936 12:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f5a5069d-36b8-4bca-aeb2-d0c0b34e1f6a -t 2000 00:33:25.194 [ 00:33:25.194 { 00:33:25.194 "aliases": [ 00:33:25.194 "lvs/lvol" 00:33:25.194 ], 00:33:25.194 "assigned_rate_limits": { 00:33:25.194 "r_mbytes_per_sec": 0, 00:33:25.194 "rw_ios_per_sec": 0, 00:33:25.194 "rw_mbytes_per_sec": 0, 00:33:25.194 "w_mbytes_per_sec": 0 00:33:25.194 }, 00:33:25.194 "block_size": 4096, 00:33:25.194 "claimed": false, 00:33:25.194 "driver_specific": { 00:33:25.194 "lvol": { 00:33:25.194 "base_bdev": "aio_bdev", 00:33:25.194 "clone": false, 00:33:25.194 "esnap_clone": false, 00:33:25.194 
"lvol_store_uuid": "0ff7ddb6-f954-41fa-be7e-ec292263b2dc", 00:33:25.194 "num_allocated_clusters": 38, 00:33:25.194 "snapshot": false, 00:33:25.194 "thin_provision": false 00:33:25.194 } 00:33:25.194 }, 00:33:25.194 "name": "f5a5069d-36b8-4bca-aeb2-d0c0b34e1f6a", 00:33:25.194 "num_blocks": 38912, 00:33:25.194 "product_name": "Logical Volume", 00:33:25.194 "supported_io_types": { 00:33:25.194 "abort": false, 00:33:25.194 "compare": false, 00:33:25.194 "compare_and_write": false, 00:33:25.194 "copy": false, 00:33:25.194 "flush": false, 00:33:25.194 "get_zone_info": false, 00:33:25.194 "nvme_admin": false, 00:33:25.194 "nvme_io": false, 00:33:25.194 "nvme_io_md": false, 00:33:25.194 "nvme_iov_md": false, 00:33:25.194 "read": true, 00:33:25.194 "reset": true, 00:33:25.194 "seek_data": true, 00:33:25.194 "seek_hole": true, 00:33:25.194 "unmap": true, 00:33:25.194 "write": true, 00:33:25.194 "write_zeroes": true, 00:33:25.194 "zcopy": false, 00:33:25.194 "zone_append": false, 00:33:25.194 "zone_management": false 00:33:25.194 }, 00:33:25.194 "uuid": "f5a5069d-36b8-4bca-aeb2-d0c0b34e1f6a", 00:33:25.194 "zoned": false 00:33:25.194 } 00:33:25.194 ] 00:33:25.194 12:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:33:25.194 12:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0ff7ddb6-f954-41fa-be7e-ec292263b2dc 00:33:25.194 12:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:33:25.451 12:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:33:25.451 12:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:33:25.451 12:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0ff7ddb6-f954-41fa-be7e-ec292263b2dc 00:33:25.709 12:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:33:25.709 12:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:25.967 [2024-11-25 12:32:21.607198] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:33:25.968 12:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0ff7ddb6-f954-41fa-be7e-ec292263b2dc 00:33:25.968 12:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:33:25.968 12:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0ff7ddb6-f954-41fa-be7e-ec292263b2dc 00:33:25.968 12:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:25.968 
12:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:25.968 12:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:25.968 12:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:25.968 12:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:25.968 12:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:25.968 12:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:25.968 12:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:33:25.968 12:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0ff7ddb6-f954-41fa-be7e-ec292263b2dc 00:33:26.225 2024/11/25 12:32:21 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:0ff7ddb6-f954-41fa-be7e-ec292263b2dc], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:33:26.225 request: 00:33:26.225 { 00:33:26.225 "method": "bdev_lvol_get_lvstores", 00:33:26.225 "params": { 00:33:26.225 "uuid": "0ff7ddb6-f954-41fa-be7e-ec292263b2dc" 00:33:26.225 } 00:33:26.225 } 00:33:26.225 Got JSON-RPC error response 00:33:26.225 GoRPCClient: error on JSON-RPC call 00:33:26.225 12:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:33:26.225 12:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:26.225 12:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:26.225 12:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:26.225 12:32:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:26.225 aio_bdev 00:33:26.225 12:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f5a5069d-36b8-4bca-aeb2-d0c0b34e1f6a 00:33:26.225 12:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=f5a5069d-36b8-4bca-aeb2-d0c0b34e1f6a 00:33:26.225 12:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:26.225 12:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:33:26.225 12:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:26.225 12:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:26.225 12:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:26.482 12:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f5a5069d-36b8-4bca-aeb2-d0c0b34e1f6a -t 2000 00:33:26.740 [ 00:33:26.740 { 00:33:26.740 "aliases": [ 00:33:26.740 "lvs/lvol" 00:33:26.740 ], 00:33:26.740 "assigned_rate_limits": { 00:33:26.740 "r_mbytes_per_sec": 0, 00:33:26.740 "rw_ios_per_sec": 0, 00:33:26.740 "rw_mbytes_per_sec": 0, 00:33:26.740 "w_mbytes_per_sec": 0 00:33:26.740 }, 00:33:26.740 "block_size": 4096, 00:33:26.740 "claimed": false, 00:33:26.740 "driver_specific": { 00:33:26.740 "lvol": { 00:33:26.740 "base_bdev": "aio_bdev", 00:33:26.740 "clone": false, 00:33:26.740 "esnap_clone": false, 00:33:26.740 "lvol_store_uuid": "0ff7ddb6-f954-41fa-be7e-ec292263b2dc", 00:33:26.740 "num_allocated_clusters": 38, 00:33:26.740 "snapshot": false, 00:33:26.740 "thin_provision": false 00:33:26.740 } 00:33:26.740 }, 00:33:26.740 "name": "f5a5069d-36b8-4bca-aeb2-d0c0b34e1f6a", 00:33:26.740 "num_blocks": 38912, 00:33:26.740 "product_name": "Logical Volume", 00:33:26.740 "supported_io_types": { 00:33:26.740 "abort": false, 00:33:26.740 "compare": false, 00:33:26.740 "compare_and_write": false, 00:33:26.740 "copy": false, 00:33:26.740 "flush": false, 00:33:26.740 "get_zone_info": false, 00:33:26.740 "nvme_admin": false, 00:33:26.740 "nvme_io": false, 00:33:26.740 "nvme_io_md": false, 00:33:26.740 "nvme_iov_md": false, 00:33:26.740 "read": true, 00:33:26.740 "reset": true, 00:33:26.740 "seek_data": true, 00:33:26.740 "seek_hole": true, 00:33:26.740 "unmap": true, 00:33:26.740 "write": true, 00:33:26.740 "write_zeroes": true, 00:33:26.740 "zcopy": false, 00:33:26.740 "zone_append": false, 00:33:26.740 "zone_management": false 00:33:26.740 }, 00:33:26.740 "uuid": "f5a5069d-36b8-4bca-aeb2-d0c0b34e1f6a", 00:33:26.740 "zoned": false 00:33:26.740 } 00:33:26.740 ] 00:33:26.740 12:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:33:26.740 12:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:33:26.740 12:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0ff7ddb6-f954-41fa-be7e-ec292263b2dc 00:33:26.998 12:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:33:26.998 12:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0ff7ddb6-f954-41fa-be7e-ec292263b2dc 00:33:26.998 12:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:33:27.255 12:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:33:27.255 
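The Code=-19 (ENODEV) response traced above is the point of the check at nvmf_lvs_grow.sh@85: once aio_bdev is deleted, vbdev_lvol hot-removes the lvstore, so bdev_lvol_get_lvstores has to fail, and the harness's NOT wrapper converts that expected failure into a pass. The same assertion as a minimal standalone sketch, using this run's UUID:

uuid=0ff7ddb6-f954-41fa-be7e-ec292263b2dc
if /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u "$uuid" 2>/dev/null; then
    echo "FAIL: lvstore $uuid still answers after its base bdev was deleted" >&2
    exit 1
fi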
12:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete f5a5069d-36b8-4bca-aeb2-d0c0b34e1f6a 00:33:27.255 12:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0ff7ddb6-f954-41fa-be7e-ec292263b2dc 00:33:27.513 12:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:27.771 12:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:33:28.028 00:33:28.028 real 0m18.710s 00:33:28.028 user 0m26.574s 00:33:28.028 sys 0m5.807s 00:33:28.028 12:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:28.028 12:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:28.028 ************************************ 00:33:28.028 END TEST lvs_grow_dirty 00:33:28.028 ************************************ 00:33:28.028 12:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:33:28.028 12:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:33:28.028 12:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:33:28.028 12:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:33:28.028 12:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:33:28.028 12:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:33:28.028 12:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:33:28.028 12:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:33:28.028 12:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:33:28.028 nvmf_trace.0 00:33:28.285 12:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:33:28.285 12:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:33:28.285 12:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:28.285 12:32:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:33:30.194 12:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:30.194 12:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:33:30.194 12:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:30.194 12:32:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:30.194 rmmod nvme_tcp 00:33:30.194 rmmod nvme_fabrics 00:33:30.194 rmmod nvme_keyring 00:33:30.194 12:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:30.194 12:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:33:30.194 12:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:33:30.195 12:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 102348 ']' 00:33:30.195 12:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 102348 00:33:30.195 12:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 102348 ']' 00:33:30.195 12:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 102348 00:33:30.195 12:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:33:30.195 12:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:30.195 12:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102348 00:33:30.195 12:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:30.195 12:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:30.195 12:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102348' 00:33:30.195 killing process with pid 102348 00:33:30.195 12:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 102348 00:33:30.195 12:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 102348 00:33:30.195 12:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:30.195 12:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:30.195 12:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:30.195 12:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:33:30.195 12:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:33:30.195 12:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:30.195 12:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:33:30.195 12:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:30.195 12:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:33:30.195 12:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:33:30.195 12:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@234 -- 
# ip link set nvmf_init_br2 nomaster 00:33:30.195 12:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:33:30.195 12:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:33:30.195 12:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:33:30.195 12:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:33:30.195 12:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:33:30.195 12:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:33:30.195 12:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:33:30.195 12:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:33:30.195 12:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:33:30.195 12:32:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:30.195 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:30.195 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@246 -- # remove_spdk_ns 00:33:30.195 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:30.195 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:30.195 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:30.195 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:33:30.195 00:33:30.195 real 0m39.505s 00:33:30.196 user 0m43.872s 00:33:30.196 sys 0m9.726s 00:33:30.196 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:30.196 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:30.196 ************************************ 00:33:30.196 END TEST nvmf_lvs_grow 00:33:30.196 ************************************ 00:33:30.454 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:33:30.454 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:30.454 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:30.454 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:30.454 ************************************ 00:33:30.454 START TEST nvmf_bdev_io_wait 00:33:30.454 ************************************ 00:33:30.454 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:33:30.454 * Looking for test storage... 00:33:30.454 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:33:30.454 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:30.454 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:30.454 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:33:30.454 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:30.454 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:30.454 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:30.454 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:30.454 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:33:30.454 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:33:30.454 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:33:30.454 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:33:30.454 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:33:30.454 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:33:30.454 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:33:30.454 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:30.454 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:33:30.454 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:33:30.454 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:30.454 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:30.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:30.455 --rc genhtml_branch_coverage=1 00:33:30.455 --rc genhtml_function_coverage=1 00:33:30.455 --rc genhtml_legend=1 00:33:30.455 --rc geninfo_all_blocks=1 00:33:30.455 --rc geninfo_unexecuted_blocks=1 00:33:30.455 00:33:30.455 ' 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:30.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:30.455 --rc genhtml_branch_coverage=1 00:33:30.455 --rc genhtml_function_coverage=1 00:33:30.455 --rc genhtml_legend=1 00:33:30.455 --rc geninfo_all_blocks=1 00:33:30.455 --rc geninfo_unexecuted_blocks=1 00:33:30.455 00:33:30.455 ' 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:30.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:30.455 --rc genhtml_branch_coverage=1 00:33:30.455 --rc genhtml_function_coverage=1 00:33:30.455 --rc genhtml_legend=1 00:33:30.455 --rc geninfo_all_blocks=1 00:33:30.455 --rc geninfo_unexecuted_blocks=1 00:33:30.455 00:33:30.455 ' 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:30.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:30.455 --rc genhtml_branch_coverage=1 00:33:30.455 --rc genhtml_function_coverage=1 00:33:30.455 --rc genhtml_legend=1 00:33:30.455 --rc geninfo_all_blocks=1 00:33:30.455 --rc 
geninfo_unexecuted_blocks=1 00:33:30.455 00:33:30.455 ' 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:33:30.455 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:33:30.456 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:33:30.456 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:33:30.456 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:33:30.456 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:33:30.456 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:30.456 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:33:30.456 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:33:30.456 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:33:30.456 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:30.456 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:33:30.456 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:33:30.456 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:33:30.456 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:33:30.456 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:33:30.456 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:33:30.456 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:30.456 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:33:30.456 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:33:30.456 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:33:30.456 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:33:30.456 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:33:30.456 Cannot find device "nvmf_init_br" 00:33:30.456 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:33:30.456 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:33:30.456 Cannot find device "nvmf_init_br2" 00:33:30.456 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:33:30.456 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:33:30.456 Cannot find device "nvmf_tgt_br" 00:33:30.456 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:33:30.456 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:33:30.456 Cannot find device "nvmf_tgt_br2" 00:33:30.456 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:33:30.456 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:33:30.456 Cannot find device "nvmf_init_br" 00:33:30.456 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:33:30.456 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:33:30.456 Cannot find device "nvmf_init_br2" 00:33:30.456 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:33:30.456 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- 
# ip link set nvmf_tgt_br down 00:33:30.456 Cannot find device "nvmf_tgt_br" 00:33:30.456 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:33:30.456 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:33:30.456 Cannot find device "nvmf_tgt_br2" 00:33:30.456 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:33:30.456 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:33:30.714 Cannot find device "nvmf_br" 00:33:30.714 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:33:30.714 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:33:30.714 Cannot find device "nvmf_init_if" 00:33:30.714 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:33:30.714 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:33:30.714 Cannot find device "nvmf_init_if2" 00:33:30.714 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:33:30.714 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:30.714 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:30.714 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:33:30.714 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:30.714 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:30.714 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:33:30.714 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:33:30.714 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:33:30.714 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:33:30.714 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:33:30.714 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:33:30.714 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:33:30.714 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:33:30.714 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:33:30.714 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:33:30.715 12:32:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:33:30.715 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:33:30.715 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:33:30.715 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:33:30.715 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:33:30.715 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:33:30.715 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:33:30.715 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:33:30.715 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:33:30.715 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:33:30.715 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:33:30.715 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:33:30.715 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:33:30.715 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:33:30.715 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:33:30.715 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:33:30.715 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:33:30.715 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:33:30.715 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:33:30.715 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:33:30.715 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:33:30.715 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:30.715 
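The nvmf_veth_init block above builds the test network from scratch: two initiator/target veth pairs, the target ends moved into the nvmf_tgt_ns_spdk namespace, everything joined by a bridge, and iptables opened for the NVMe/TCP port. Stripped of the trace prefixes, and omitting the matching "up" and second-interface commands (which follow the same pattern for nvmf_init_if2/10.0.0.2 and nvmf_tgt_if2/10.0.0.4), the topology reduces to:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br                     # bridge the two halves together
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The four pings that follow simply prove both address pairs are reachable across the bridge before any NVMe traffic is attempted.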
12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:33:30.715 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:33:30.715 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:33:30.715 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:33:30.715 00:33:30.715 --- 10.0.0.3 ping statistics --- 00:33:30.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:30.715 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:33:30.715 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:33:30.715 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:33:30.715 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:33:30.715 00:33:30.715 --- 10.0.0.4 ping statistics --- 00:33:30.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:30.715 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:33:30.715 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:33:30.715 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:30.715 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.014 ms 00:33:30.715 00:33:30.715 --- 10.0.0.1 ping statistics --- 00:33:30.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:30.715 rtt min/avg/max/mdev = 0.014/0.014/0.014/0.000 ms 00:33:30.715 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:33:30.715 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:30.715 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:33:30.715 00:33:30.715 --- 10.0.0.2 ping statistics --- 00:33:30.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:30.715 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:33:30.715 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:30.715 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:33:30.715 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:30.715 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:30.715 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:30.715 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:30.715 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:30.715 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:30.715 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:30.715 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:33:30.715 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:30.715 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:30.715 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:30.715 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=102824 00:33:30.715 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 102824 00:33:30.715 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 102824 ']' 00:33:30.715 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:30.715 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:30.715 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:30.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
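With connectivity proven, nvmfappstart launches nvmf_tgt inside the namespace and blocks in waitforlisten until the new process (pid 102824 here) answers on its RPC socket; the "(( i == 0 ))" / "return 0" pair a little further down is that loop terminating successfully. Roughly, the helper has this shape (a sketch, not the verbatim function from autotest_common.sh):

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 100; i != 0; i--)); do
        kill -0 "$pid" 2>/dev/null || return 1   # the app died while starting
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods \
            &>/dev/null && return 0              # socket is up and answering
        sleep 0.1
    done
    return 1                                     # timed out waiting for the listener
}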
00:33:30.715 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:30.715 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:30.715 12:32:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:33:30.715 [2024-11-25 12:32:26.588418] thread.c:3055:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:30.715 [2024-11-25 12:32:26.589241] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:33:30.715 [2024-11-25 12:32:26.589285] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:30.973 [2024-11-25 12:32:26.731597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:30.973 [2024-11-25 12:32:26.775564] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:30.973 [2024-11-25 12:32:26.775599] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:30.973 [2024-11-25 12:32:26.775606] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:30.973 [2024-11-25 12:32:26.775611] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:30.973 [2024-11-25 12:32:26.775616] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:30.973 [2024-11-25 12:32:26.776567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:30.973 [2024-11-25 12:32:26.776894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:30.973 [2024-11-25 12:32:26.776967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:30.973 [2024-11-25 12:32:26.776977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:30.973 [2024-11-25 12:32:26.777363] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
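This instance comes up with -m 0xF, so four reactors start (cores 0 through 3) and each poll-group thread is immediately switched to interrupt mode, in contrast to the single-core 0x1 mask used for the lvs_grow run earlier. To confirm the reactor layout from outside the harness, framework_get_reactors is the standard SPDK RPC for it (output fields vary somewhat across versions, so treat the jq path as an assumption):

/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_reactors | jq -r '.reactors[].lcore'
# expected for this run: 0 1 2 3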
00:33:31.906 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:31.906 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:33:31.906 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:31.906 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:31.906 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:31.906 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:31.906 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:33:31.906 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.906 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:31.906 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.906 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:33:31.906 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.906 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:31.906 [2024-11-25 12:32:27.545433] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:31.906 [2024-11-25 12:32:27.545954] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:31.906 [2024-11-25 12:32:27.546158] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:31.906 [2024-11-25 12:32:27.547044] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
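Because the target was started with --wait-for-rpc, framework initialization is deferred until an explicit RPC, which is what lets the test set pre-init bdev options first; the required ordering, exactly as driven above and in the lines that follow, is:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_set_options -p 5 -c 1                # bdev-io pool/cache sizes; only accepted before init
$rpc framework_start_init                      # run the deferred subsystem initialization
$rpc nvmf_create_transport -t tcp -o -u 8192   # from here on, normal runtime RPCs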
00:33:31.906 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.906 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:31.906 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.906 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:31.906 [2024-11-25 12:32:27.557849] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:31.906 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.906 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:31.906 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.906 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:31.906 Malloc0 00:33:31.906 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.906 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:31.906 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.906 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:31.906 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.906 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:31.906 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.906 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:31.906 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.906 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:33:31.906 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.906 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:31.906 [2024-11-25 12:32:27.613925] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:33:31.906 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.906 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=102877 00:33:31.906 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:33:31.906 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=102879 00:33:31.906 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:33:31.906 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=102881 00:33:31.906 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:33:31.906 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:31.906 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:33:31.906 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:31.906 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:31.906 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:31.906 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:31.906 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:31.906 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:31.906 { 00:33:31.906 "params": { 00:33:31.906 "name": "Nvme$subsystem", 00:33:31.906 "trtype": "$TEST_TRANSPORT", 00:33:31.906 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:31.906 "adrfam": "ipv4", 00:33:31.906 "trsvcid": "$NVMF_PORT", 00:33:31.906 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:31.906 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:31.906 "hdgst": ${hdgst:-false}, 00:33:31.906 "ddgst": ${ddgst:-false} 00:33:31.906 }, 00:33:31.906 "method": "bdev_nvme_attach_controller" 00:33:31.906 } 00:33:31.906 EOF 00:33:31.906 )") 00:33:31.906 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:33:31.906 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:31.906 { 00:33:31.906 "params": { 00:33:31.906 "name": "Nvme$subsystem", 00:33:31.906 "trtype": "$TEST_TRANSPORT", 00:33:31.906 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:31.906 "adrfam": "ipv4", 00:33:31.906 "trsvcid": "$NVMF_PORT", 00:33:31.906 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:31.906 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:31.906 "hdgst": ${hdgst:-false}, 00:33:31.906 "ddgst": ${ddgst:-false} 00:33:31.906 }, 00:33:31.906 "method": "bdev_nvme_attach_controller" 00:33:31.906 } 00:33:31.906 EOF 00:33:31.906 )") 00:33:31.906 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=102883 00:33:31.906 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:33:31.906 12:32:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:31.906 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:33:31.906 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:31.906 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:33:31.906 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:31.906 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:31.906 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:31.906 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:31.906 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:31.906 { 00:33:31.906 "params": { 00:33:31.906 "name": "Nvme$subsystem", 00:33:31.906 "trtype": "$TEST_TRANSPORT", 00:33:31.906 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:31.906 "adrfam": "ipv4", 00:33:31.906 "trsvcid": "$NVMF_PORT", 00:33:31.907 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:31.907 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:31.907 "hdgst": ${hdgst:-false}, 00:33:31.907 "ddgst": ${ddgst:-false} 00:33:31.907 }, 00:33:31.907 "method": "bdev_nvme_attach_controller" 00:33:31.907 } 00:33:31.907 EOF 00:33:31.907 )") 00:33:31.907 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:31.907 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:31.907 "params": { 00:33:31.907 "name": "Nvme1", 00:33:31.907 "trtype": "tcp", 00:33:31.907 "traddr": "10.0.0.3", 00:33:31.907 "adrfam": "ipv4", 00:33:31.907 "trsvcid": "4420", 00:33:31.907 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:31.907 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:31.907 "hdgst": false, 00:33:31.907 "ddgst": false 00:33:31.907 }, 00:33:31.907 "method": "bdev_nvme_attach_controller" 00:33:31.907 }' 00:33:31.907 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:31.907 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:33:31.907 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:31.907 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:31.907 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:31.907 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:31.907 { 00:33:31.907 "params": { 00:33:31.907 "name": "Nvme$subsystem", 00:33:31.907 "trtype": "$TEST_TRANSPORT", 00:33:31.907 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:31.907 "adrfam": "ipv4", 00:33:31.907 "trsvcid": "$NVMF_PORT", 00:33:31.907 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:31.907 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:33:31.907 "hdgst": ${hdgst:-false}, 00:33:31.907 "ddgst": ${ddgst:-false} 00:33:31.907 }, 00:33:31.907 "method": "bdev_nvme_attach_controller" 00:33:31.907 } 00:33:31.907 EOF 00:33:31.907 )") 00:33:31.907 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:31.907 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:31.907 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:31.907 "params": { 00:33:31.907 "name": "Nvme1", 00:33:31.907 "trtype": "tcp", 00:33:31.907 "traddr": "10.0.0.3", 00:33:31.907 "adrfam": "ipv4", 00:33:31.907 "trsvcid": "4420", 00:33:31.907 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:31.907 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:31.907 "hdgst": false, 00:33:31.907 "ddgst": false 00:33:31.907 }, 00:33:31.907 "method": "bdev_nvme_attach_controller" 00:33:31.907 }' 00:33:31.907 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:31.907 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:31.907 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:31.907 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:31.907 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:31.907 "params": { 00:33:31.907 "name": "Nvme1", 00:33:31.907 "trtype": "tcp", 00:33:31.907 "traddr": "10.0.0.3", 00:33:31.907 "adrfam": "ipv4", 00:33:31.907 "trsvcid": "4420", 00:33:31.907 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:31.907 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:31.907 "hdgst": false, 00:33:31.907 "ddgst": false 00:33:31.907 }, 00:33:31.907 "method": "bdev_nvme_attach_controller" 00:33:31.907 }' 00:33:31.907 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:31.907 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:31.907 "params": { 00:33:31.907 "name": "Nvme1", 00:33:31.907 "trtype": "tcp", 00:33:31.907 "traddr": "10.0.0.3", 00:33:31.907 "adrfam": "ipv4", 00:33:31.907 "trsvcid": "4420", 00:33:31.907 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:31.907 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:31.907 "hdgst": false, 00:33:31.907 "ddgst": false 00:33:31.907 }, 00:33:31.907 "method": "bdev_nvme_attach_controller" 00:33:31.907 }' 00:33:31.907 [2024-11-25 12:32:27.653634] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:33:31.907 [2024-11-25 12:32:27.653689] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:33:31.907 [2024-11-25 12:32:27.655634] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:33:31.907 [2024-11-25 12:32:27.655802] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:33:31.907 [2024-11-25 12:32:27.657812] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:33:31.907 [2024-11-25 12:32:27.657863] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:33:31.907 [2024-11-25 12:32:27.669529] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:33:31.907 [2024-11-25 12:32:27.669579] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:33:31.907 12:32:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 102877 00:33:32.165 [2024-11-25 12:32:27.828797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:32.165 [2024-11-25 12:32:27.857511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:33:32.165 [2024-11-25 12:32:27.870779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:32.165 [2024-11-25 12:32:27.899927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:33:32.165 [2024-11-25 12:32:27.914651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:32.165 [2024-11-25 12:32:27.943500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:33:32.165 [2024-11-25 12:32:27.957415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:32.165 Running I/O for 1 seconds... 00:33:32.165 [2024-11-25 12:32:27.986921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:33:32.165 Running I/O for 1 seconds... 00:33:32.423 Running I/O for 1 seconds... 00:33:32.423 Running I/O for 1 seconds... 
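The --json /dev/fd/63 argument each bdevperf receives is bash process substitution: gen_nvmf_target_json (the nvmf/common.sh heredoc traced above) prints a bdev_nvme_attach_controller config pointing at 10.0.0.3:4420, and each instance reads it from an anonymous pipe. The four launches amount to this sketch (disjoint core masks so they run concurrently; PID bookkeeping simplified relative to the script's WRITE_PID/READ_PID/FLUSH_PID/UNMAP_PID variables):

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    pids=()
    for spec in "0x10 1 write" "0x20 2 read" "0x40 3 flush" "0x80 4 unmap"; do
        read -r mask shm_id workload <<< "$spec"
        # one core each, 128 IOs in flight, 4 KiB blocks, 1 s run, 256 MB of memory
        "$bdevperf" -m "$mask" -i "$shm_id" --json <(gen_nvmf_target_json) \
            -q 128 -o 4096 -w "$workload" -t 1 -s 256 &
        pids+=($!)
    done
    wait "${pids[@]}"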
00:33:33.354 7461.00 IOPS, 29.14 MiB/s
00:33:33.354 Latency(us)
00:33:33.354 [2024-11-25T12:32:29.230Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:33.354 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:33:33.354 Nvme1n1 : 1.02 7463.03 29.15 0.00 0.00 17059.39 4310.25 27625.94
00:33:33.354 [2024-11-25T12:32:29.230Z] ===================================================================================================================
00:33:33.354 [2024-11-25T12:32:29.230Z] Total : 7463.03 29.15 0.00 0.00 17059.39 4310.25 27625.94
00:33:33.354 10630.00 IOPS, 41.52 MiB/s
00:33:33.354 Latency(us)
00:33:33.354 [2024-11-25T12:32:29.230Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:33.354 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:33:33.354 Nvme1n1 : 1.01 10686.61 41.74 0.00 0.00 11932.85 4915.20 21878.94
00:33:33.354 [2024-11-25T12:32:29.230Z] ===================================================================================================================
00:33:33.354 [2024-11-25T12:32:29.230Z] Total : 10686.61 41.74 0.00 0.00 11932.85 4915.20 21878.94
00:33:33.354 190936.00 IOPS, 745.84 MiB/s
00:33:33.354 Latency(us)
00:33:33.354 [2024-11-25T12:32:29.230Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:33.354 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:33:33.354 Nvme1n1 : 1.00 190583.80 744.47 0.00 0.00 667.72 286.72 1840.05
00:33:33.354 [2024-11-25T12:32:29.230Z] ===================================================================================================================
00:33:33.354 [2024-11-25T12:32:29.230Z] Total : 190583.80 744.47 0.00 0.00 667.72 286.72 1840.05
00:33:33.354 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 102879
00:33:33.354 7944.00 IOPS, 31.03 MiB/s
00:33:33.354 Latency(us)
00:33:33.354 [2024-11-25T12:32:29.230Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:33.354 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:33:33.354 Nvme1n1 : 1.01 8076.04 31.55 0.00 0.00 15812.95 3352.42 34280.37
00:33:33.354 [2024-11-25T12:32:29.230Z] ===================================================================================================================
00:33:33.354 [2024-11-25T12:32:29.230Z] Total : 8076.04 31.55 0.00 0.00 15812.95 3352.42 34280.37
00:33:33.354 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 102881
00:33:33.354 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 102883
00:33:33.354 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:33:33.354 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:33.354 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:33:33.354 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:33.354 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:33:33.354 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait --
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:33:33.354 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:33.354 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:33:33.611 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:33.611 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:33:33.611 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:33.611 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:33.611 rmmod nvme_tcp 00:33:33.611 rmmod nvme_fabrics 00:33:33.611 rmmod nvme_keyring 00:33:33.611 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:33.611 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:33:33.611 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:33:33.611 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 102824 ']' 00:33:33.611 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 102824 00:33:33.611 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 102824 ']' 00:33:33.611 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 102824 00:33:33.611 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:33:33.611 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:33.611 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102824 00:33:33.611 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:33.611 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:33.611 killing process with pid 102824 00:33:33.611 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102824' 00:33:33.611 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 102824 00:33:33.611 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 102824 00:33:33.611 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:33.611 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:33.611 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:33.611 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:33:33.611 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:33:33.611 
12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:33:33.611 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:33.611 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:33.611 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:33:33.611 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:33:33.611 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:33:33.611 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:33:33.869 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:33:33.869 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:33:33.869 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:33:33.869 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:33:33.869 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:33:33.869 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:33:33.869 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:33:33.869 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:33:33.869 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:33.869 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:33.869 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:33:33.869 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:33.869 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:33.869 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:33.869 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:33:33.869 00:33:33.869 real 0m3.602s 00:33:33.869 user 0m11.774s 00:33:33.869 sys 0m1.521s 00:33:33.869 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:33.869 ************************************ 00:33:33.869 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:33.869 END TEST nvmf_bdev_io_wait 00:33:33.869 
************************************ 00:33:33.869 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:33:33.869 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:33.869 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:33.869 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:33.869 ************************************ 00:33:33.869 START TEST nvmf_queue_depth 00:33:33.869 ************************************ 00:33:33.869 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:33:34.129 * Looking for test storage... 00:33:34.129 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:33:34.129 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:34.129 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:33:34.129 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:34.129 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:34.129 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:34.129 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:34.129 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:34.129 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:33:34.129 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:33:34.129 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:33:34.129 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:33:34.129 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:33:34.129 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:33:34.129 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:33:34.129 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:34.129 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:33:34.129 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:33:34.129 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:34.129 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:34.129 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:33:34.129 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:33:34.129 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:34.129 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:33:34.129 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:33:34.129 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:33:34.129 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:33:34.129 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:34.129 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:33:34.129 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:33:34.129 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:34.129 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:34.129 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:33:34.129 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:34.129 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:34.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:34.129 --rc genhtml_branch_coverage=1 00:33:34.129 --rc genhtml_function_coverage=1 00:33:34.129 --rc genhtml_legend=1 00:33:34.129 --rc geninfo_all_blocks=1 00:33:34.129 --rc geninfo_unexecuted_blocks=1 00:33:34.129 00:33:34.129 ' 00:33:34.129 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:34.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:34.129 --rc genhtml_branch_coverage=1 00:33:34.129 --rc genhtml_function_coverage=1 00:33:34.129 --rc genhtml_legend=1 00:33:34.129 --rc geninfo_all_blocks=1 00:33:34.129 --rc geninfo_unexecuted_blocks=1 00:33:34.129 00:33:34.129 ' 00:33:34.129 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:34.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:34.129 --rc genhtml_branch_coverage=1 00:33:34.129 --rc genhtml_function_coverage=1 00:33:34.129 --rc genhtml_legend=1 00:33:34.129 --rc geninfo_all_blocks=1 00:33:34.129 --rc geninfo_unexecuted_blocks=1 00:33:34.129 00:33:34.129 ' 00:33:34.129 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:34.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:34.129 --rc genhtml_branch_coverage=1 00:33:34.129 --rc genhtml_function_coverage=1 00:33:34.129 --rc genhtml_legend=1 00:33:34.129 --rc geninfo_all_blocks=1 00:33:34.129 --rc 
geninfo_unexecuted_blocks=1 00:33:34.129 00:33:34.129 ' 00:33:34.129 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:34.129 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:33:34.129 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:34.129 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:34.129 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:34.129 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:34.129 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:34.129 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:34.129 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:34.129 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:34.129 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:34.129 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:34.129 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:33:34.129 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:33:34.129 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:34.129 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:34.129 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:34.129 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:34.129 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@148 -- # 
NVMF_SECOND_TARGET_IP=10.0.0.4 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:33:34.130 Cannot find device "nvmf_init_br" 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:33:34.130 Cannot find device "nvmf_init_br2" 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:33:34.130 Cannot find device "nvmf_tgt_br" 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:33:34.130 Cannot find device "nvmf_tgt_br2" 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:33:34.130 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:33:34.131 Cannot find device "nvmf_init_br" 00:33:34.131 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:33:34.131 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:33:34.131 Cannot find device "nvmf_init_br2" 00:33:34.131 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:33:34.131 
12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:33:34.131 Cannot find device "nvmf_tgt_br" 00:33:34.131 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:33:34.131 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:33:34.131 Cannot find device "nvmf_tgt_br2" 00:33:34.131 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:33:34.131 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:33:34.131 Cannot find device "nvmf_br" 00:33:34.131 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:33:34.131 12:32:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:33:34.389 Cannot find device "nvmf_init_if" 00:33:34.389 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:33:34.389 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:33:34.389 Cannot find device "nvmf_init_if2" 00:33:34.389 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:33:34.389 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:34.389 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:34.389 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:33:34.389 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:34.389 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:34.389 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:33:34.389 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:33:34.389 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:33:34.389 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:33:34.389 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:33:34.389 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:33:34.389 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:33:34.389 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:33:34.390 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:33:34.390 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@191 -- 
# ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:33:34.390 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:33:34.390 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:33:34.390 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:33:34.390 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:33:34.390 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:33:34.390 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:33:34.390 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:33:34.390 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:33:34.390 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:33:34.390 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:33:34.390 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:33:34.390 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:33:34.390 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:33:34.390 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:33:34.390 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:33:34.390 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:33:34.390 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:33:34.390 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:33:34.390 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:33:34.390 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:33:34.390 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:33:34.390 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i 
nvmf_br -o nvmf_br -j ACCEPT
00:33:34.390 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:33:34.390 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:33:34.390 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:33:34.390 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms
00:33:34.390
00:33:34.390 --- 10.0.0.3 ping statistics ---
00:33:34.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:34.390 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms
00:33:34.390 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:33:34.390 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:33:34.390 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms
00:33:34.390
00:33:34.390 --- 10.0.0.4 ping statistics ---
00:33:34.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:34.390 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms
00:33:34.390 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:33:34.390 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:33:34.390 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms
00:33:34.390
00:33:34.390 --- 10.0.0.1 ping statistics ---
00:33:34.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:34.390 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms
00:33:34.390 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:33:34.390 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:33:34.390 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.035 ms
00:33:34.390
00:33:34.390 --- 10.0.0.2 ping statistics ---
00:33:34.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:34.390 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms
00:33:34.390 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:33:34.390 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0
00:33:34.390 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:33:34.390 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:33:34.390 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:33:34.390 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:33:34.390 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:33:34.390 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:33:34.390 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:33:34.390 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2
00:33:34.390 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:33:34.390 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable
00:33:34.390 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:33:34.390 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=103139
00:33:34.390 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 103139
00:33:34.390 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2
00:33:34.390 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 103139 ']'
00:33:34.390 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:34.390 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100
00:33:34.390 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:34.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
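Everything the nvmf_veth_init trace above does condenses to a small topology: initiator veth pairs on the host side, target veth pairs moved into nvmf_tgt_ns_spdk, all joined over the nvmf_br bridge, with iptables ACCEPT rules for port 4420 and a one-packet ping in each direction as the smoke test. One initiator/target pair of it as a sketch (commands as traced; the second pair and the reverse-direction rules follow the same pattern):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator stays on the host
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end goes into the netns
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # (each interface also needs `ip link set ... up`, as in the trace)
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3    # host -> target address inside the namespace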
00:33:34.390 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:34.390 12:32:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:34.648 [2024-11-25 12:32:30.266964] thread.c:3055:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:34.648 [2024-11-25 12:32:30.267578] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:33:34.648 [2024-11-25 12:32:30.267615] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:34.648 [2024-11-25 12:32:30.404732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:34.648 [2024-11-25 12:32:30.445298] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:34.648 [2024-11-25 12:32:30.445332] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:34.648 [2024-11-25 12:32:30.445339] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:34.648 [2024-11-25 12:32:30.445344] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:34.648 [2024-11-25 12:32:30.445349] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:34.648 [2024-11-25 12:32:30.445610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:34.648 [2024-11-25 12:32:30.514365] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:34.648 [2024-11-25 12:32:30.514633] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
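
The app's own startup notices spell out how to inspect the tracepoints enabled by -e 0xFFFF. A sketch of the two options it advertises (commands taken from the notices above):

    # Live snapshot of the enabled tracepoint groups for shm instance 0:
    spdk_trace -s nvmf -i 0
    # Or keep the raw trace buffer for offline analysis/debug:
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0
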
00:33:35.585 12:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:35.585 12:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:33:35.585 12:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:35.585 12:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:35.585 12:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:35.585 12:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:35.585 12:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:35.585 12:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:35.585 12:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:35.585 [2024-11-25 12:32:31.146232] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:35.585 12:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:35.585 12:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:35.585 12:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:35.585 12:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:35.585 Malloc0 00:33:35.585 12:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:35.585 12:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:35.585 12:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:35.585 12:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:35.585 12:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:35.585 12:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:35.585 12:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:35.585 12:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:35.585 12:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:35.585 12:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:33:35.585 12:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 
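
rpc_cmd in the trace is a thin wrapper over scripts/rpc.py talking to /var/tmp/spdk.sock. Stripped of the wrappers, the queue-depth fixture built in this block is approximately:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # socket defaults to /var/tmp/spdk.sock

    $RPC nvmf_create_transport -t tcp -o -u 8192      # TCP transport, '-o -u 8192' as the harness passes it
    $RPC bdev_malloc_create 64 512 -b Malloc0         # 64 MiB RAM-backed bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
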
00:33:35.585 12:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:35.585 [2024-11-25 12:32:31.190218] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:33:35.585 12:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:35.585 12:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=103189 00:33:35.585 12:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:35.585 12:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 103189 /var/tmp/bdevperf.sock 00:33:35.585 12:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:33:35.585 12:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 103189 ']' 00:33:35.585 12:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:35.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:35.585 12:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:35.585 12:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:35.585 12:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:35.585 12:32:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:35.585 [2024-11-25 12:32:31.228257] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:33:35.585 [2024-11-25 12:32:31.228308] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103189 ] 00:33:35.585 [2024-11-25 12:32:31.352058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:35.585 [2024-11-25 12:32:31.382409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:36.528 12:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:36.528 12:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:33:36.528 12:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:36.528 12:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.528 12:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:36.528 NVMe0n1 00:33:36.528 12:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.528 12:32:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:36.528 Running I/O for 10 seconds... 00:33:38.396 8903.00 IOPS, 34.78 MiB/s [2024-11-25T12:32:35.645Z] 9616.50 IOPS, 37.56 MiB/s [2024-11-25T12:32:36.577Z] 10460.67 IOPS, 40.86 MiB/s [2024-11-25T12:32:37.509Z] 10972.50 IOPS, 42.86 MiB/s [2024-11-25T12:32:38.443Z] 11276.00 IOPS, 44.05 MiB/s [2024-11-25T12:32:39.408Z] 11474.83 IOPS, 44.82 MiB/s [2024-11-25T12:32:40.338Z] 11610.43 IOPS, 45.35 MiB/s [2024-11-25T12:32:41.268Z] 11766.75 IOPS, 45.96 MiB/s [2024-11-25T12:32:42.641Z] 11887.89 IOPS, 46.44 MiB/s [2024-11-25T12:32:42.641Z] 12005.70 IOPS, 46.90 MiB/s 00:33:46.765 Latency(us) 00:33:46.765 [2024-11-25T12:32:42.641Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:46.765 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:33:46.765 Verification LBA range: start 0x0 length 0x4000 00:33:46.765 NVMe0n1 : 10.06 12034.49 47.01 0.00 0.00 84818.73 17341.83 73803.62 00:33:46.765 [2024-11-25T12:32:42.641Z] =================================================================================================================== 00:33:46.765 [2024-11-25T12:32:42.641Z] Total : 12034.49 47.01 0.00 0.00 84818.73 17341.83 73803.62 00:33:46.765 { 00:33:46.765 "results": [ 00:33:46.765 { 00:33:46.765 "job": "NVMe0n1", 00:33:46.765 "core_mask": "0x1", 00:33:46.765 "workload": "verify", 00:33:46.765 "status": "finished", 00:33:46.765 "verify_range": { 00:33:46.765 "start": 0, 00:33:46.765 "length": 16384 00:33:46.765 }, 00:33:46.765 "queue_depth": 1024, 00:33:46.765 "io_size": 4096, 00:33:46.765 "runtime": 10.061163, 00:33:46.765 "iops": 12034.493427847257, 00:33:46.765 "mibps": 47.00973995252835, 00:33:46.765 "io_failed": 0, 00:33:46.765 "io_timeout": 0, 00:33:46.765 "avg_latency_us": 84818.7253872646, 00:33:46.765 "min_latency_us": 17341.833846153848, 00:33:46.765 "max_latency_us": 73803.61846153846 00:33:46.765 } 00:33:46.765 
], 00:33:46.765 "core_count": 1 00:33:46.765 } 00:33:46.765 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 103189 00:33:46.765 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 103189 ']' 00:33:46.765 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 103189 00:33:46.765 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:33:46.765 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:46.765 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103189 00:33:46.765 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:46.765 killing process with pid 103189 00:33:46.765 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:46.765 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103189' 00:33:46.765 Received shutdown signal, test time was about 10.000000 seconds 00:33:46.765 00:33:46.765 Latency(us) 00:33:46.765 [2024-11-25T12:32:42.641Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:46.765 [2024-11-25T12:32:42.641Z] =================================================================================================================== 00:33:46.765 [2024-11-25T12:32:42.641Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:46.765 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 103189 00:33:46.765 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 103189 00:33:46.765 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:33:46.765 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:33:46.765 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:46.765 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:33:46.765 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:46.765 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:33:46.765 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:46.765 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:46.765 rmmod nvme_tcp 00:33:46.765 rmmod nvme_fabrics 00:33:46.765 rmmod nvme_keyring 00:33:46.765 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:46.765 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:33:46.765 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:33:46.765 12:32:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 103139 ']' 00:33:46.765 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 103139 00:33:46.765 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 103139 ']' 00:33:46.765 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 103139 00:33:46.765 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:33:46.765 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:46.765 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103139 00:33:46.765 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:46.765 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:46.765 killing process with pid 103139 00:33:46.765 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103139' 00:33:46.765 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 103139 00:33:46.765 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 103139 00:33:47.023 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:47.023 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:47.023 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:47.023 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:33:47.023 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:33:47.023 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:47.023 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:33:47.024 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:47.024 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:33:47.024 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:33:47.024 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:33:47.024 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:33:47.024 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:33:47.024 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:33:47.024 12:32:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:33:47.024 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:33:47.024 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:33:47.024 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:33:47.024 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:33:47.024 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:33:47.024 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:47.024 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:47.282 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:33:47.282 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:47.282 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:47.282 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:47.282 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:33:47.282 00:33:47.282 real 0m13.201s 00:33:47.282 user 0m22.018s 00:33:47.282 sys 0m1.843s 00:33:47.282 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:47.282 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:47.282 ************************************ 00:33:47.282 END TEST nvmf_queue_depth 00:33:47.282 ************************************ 00:33:47.282 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:33:47.282 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:47.282 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:47.282 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:47.282 ************************************ 00:33:47.282 START TEST nvmf_target_multipath 00:33:47.282 ************************************ 00:33:47.282 12:32:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:33:47.282 * Looking for test storage... 
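
Before the multipath test spins up, one sanity check on the queue-depth numbers reported earlier: the MiB/s column in the bdevperf summary is simply IOPS times the 4096 B I/O size. A quick recomputation with the values copied from that summary:

    # 12034.49 IOPS * 4096 B per I/O, expressed in MiB/s:
    awk 'BEGIN { printf "%.2f MiB/s\n", 12034.49 * 4096 / 1048576 }'
    # -> 47.01 MiB/s, matching bdevperf's summary line; over the 10.06 s
    # runtime that is roughly 121k I/Os verified at queue depth 1024.
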
00:33:47.282 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:33:47.282 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:47.282 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:47.282 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:33:47.282 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:47.282 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:47.282 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:47.282 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:47.282 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:33:47.282 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:33:47.282 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:33:47.282 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:33:47.282 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:33:47.282 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:33:47.282 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:33:47.282 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:47.282 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:33:47.282 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:33:47.282 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:47.282 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:47.282 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:33:47.282 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:33:47.282 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:47.282 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:33:47.282 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:33:47.282 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:33:47.282 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:33:47.282 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:47.282 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:33:47.282 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:33:47.282 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:47.282 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:47.282 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:33:47.282 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:47.282 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:47.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:47.282 --rc genhtml_branch_coverage=1 00:33:47.282 --rc genhtml_function_coverage=1 00:33:47.282 --rc genhtml_legend=1 00:33:47.282 --rc geninfo_all_blocks=1 00:33:47.282 --rc geninfo_unexecuted_blocks=1 00:33:47.282 00:33:47.282 ' 00:33:47.282 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:47.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:47.282 --rc genhtml_branch_coverage=1 00:33:47.282 --rc genhtml_function_coverage=1 00:33:47.282 --rc genhtml_legend=1 00:33:47.282 --rc geninfo_all_blocks=1 00:33:47.282 --rc geninfo_unexecuted_blocks=1 00:33:47.282 00:33:47.282 ' 00:33:47.282 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:47.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:47.282 --rc genhtml_branch_coverage=1 00:33:47.282 --rc genhtml_function_coverage=1 00:33:47.282 --rc genhtml_legend=1 00:33:47.282 --rc geninfo_all_blocks=1 00:33:47.282 --rc geninfo_unexecuted_blocks=1 00:33:47.282 00:33:47.282 ' 00:33:47.282 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:47.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:47.282 --rc genhtml_branch_coverage=1 00:33:47.282 --rc genhtml_function_coverage=1 00:33:47.282 --rc 
genhtml_legend=1 00:33:47.282 --rc geninfo_all_blocks=1 00:33:47.282 --rc geninfo_unexecuted_blocks=1 00:33:47.282 00:33:47.282 ' 00:33:47.282 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:47.282 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:33:47.282 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:47.282 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:47.282 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:47.282 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:47.282 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:47.282 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:47.282 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:47.283 12:32:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:47.283 12:32:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:33:47.283 Cannot find device "nvmf_init_br" 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:33:47.283 Cannot find device "nvmf_init_br2" 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:33:47.283 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:33:47.540 Cannot find device "nvmf_tgt_br" 00:33:47.540 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:33:47.540 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:33:47.540 Cannot find device "nvmf_tgt_br2" 00:33:47.540 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:33:47.540 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set 
nvmf_init_br down 00:33:47.540 Cannot find device "nvmf_init_br" 00:33:47.540 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:33:47.540 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:33:47.540 Cannot find device "nvmf_init_br2" 00:33:47.540 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:33:47.540 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:33:47.540 Cannot find device "nvmf_tgt_br" 00:33:47.540 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:33:47.540 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:33:47.540 Cannot find device "nvmf_tgt_br2" 00:33:47.540 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:33:47.540 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:33:47.540 Cannot find device "nvmf_br" 00:33:47.540 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:33:47.540 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:33:47.540 Cannot find device "nvmf_init_if" 00:33:47.540 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@171 -- # true 00:33:47.540 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:33:47.540 Cannot find device "nvmf_init_if2" 00:33:47.540 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:33:47.540 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:33:47.540 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:47.540 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:33:47.541 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:33:47.541 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:33:47.541 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:33:47.541 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:33:47.541 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:33:47.541 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:33:47.541 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:33:47.541 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add 
nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:33:47.541 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:33:47.541 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:33:47.541 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:33:47.541 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:33:47.541 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:33:47.541 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:33:47.541 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:33:47.541 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:33:47.541 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:33:47.541 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:33:47.541 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:33:47.541 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:33:47.541 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:33:47.541 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:33:47.541 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:33:47.541 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:33:47.541 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:33:47.541 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:33:47.541 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:33:47.541 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:33:47.541 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:33:47.541 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:33:47.541 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:33:47.541 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:33:47.541 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:33:47.541 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:33:47.541 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:33:47.541 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:33:47.541 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:33:47.541 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:33:47.541 00:33:47.541 --- 10.0.0.3 ping statistics --- 00:33:47.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:47.541 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:33:47.541 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:33:47.541 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:33:47.541 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.081 ms 00:33:47.541 00:33:47.541 --- 10.0.0.4 ping statistics --- 00:33:47.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:47.541 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:33:47.541 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:33:47.541 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:47.541 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:33:47.541 00:33:47.541 --- 10.0.0.1 ping statistics --- 00:33:47.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:47.541 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:33:47.541 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:33:47.541 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:47.541 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.033 ms 00:33:47.541 00:33:47.541 --- 10.0.0.2 ping statistics --- 00:33:47.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:47.541 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:33:47.541 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:47.541 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:33:47.541 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:47.541 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:47.541 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:47.541 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:47.541 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:47.541 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:47.541 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:47.798 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:33:47.798 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:33:47.799 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:33:47.799 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:47.799 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:47.799 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:33:47.799 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:33:47.799 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=103570 00:33:47.799 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 103570 00:33:47.799 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 103570 ']' 00:33:47.799 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:47.799 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:47.799 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
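
The veth setup replayed over the preceding blocks is easier to read condensed. A sketch of the two-path topology the trace builds, using only interface names, addresses, and rules that appear above; the harness additionally tags its iptables rules with an SPDK_NVMF comment, which is what the iptables-save | grep -v SPDK_NVMF | iptables-restore teardown seen earlier strips back out:

    ip netns add nvmf_tgt_ns_spdk

    # Two initiator-side and two target-side veth pairs...
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

    # ...with the target ends moved into the namespace:
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Initiator IPs .1/.2 on the host, target IPs .3/.4 in the namespace:
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # Bring every end up (plus loopback inside the namespace):
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
               nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # All four bridge-side ends enslaved to a single bridge:
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

    # Let NVMe/TCP (port 4420) in, and let the bridge forward to itself:
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The four pings in the trace then confirm the result: 10.0.0.3/.4 reachable from the host, 10.0.0.1/.2 reachable from inside the namespace.
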
00:33:47.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:47.799 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:47.799 12:32:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:33:47.799 [2024-11-25 12:32:43.464042] thread.c:3055:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:47.799 [2024-11-25 12:32:43.464818] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:33:47.799 [2024-11-25 12:32:43.464856] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:47.799 [2024-11-25 12:32:43.606887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:47.799 [2024-11-25 12:32:43.643537] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:47.799 [2024-11-25 12:32:43.643568] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:47.799 [2024-11-25 12:32:43.643575] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:47.799 [2024-11-25 12:32:43.643580] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:47.799 [2024-11-25 12:32:43.643584] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:47.799 [2024-11-25 12:32:43.644244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:47.799 [2024-11-25 12:32:43.644458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:47.799 [2024-11-25 12:32:43.644993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:47.799 [2024-11-25 12:32:43.644995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:48.056 [2024-11-25 12:32:43.697323] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:48.056 [2024-11-25 12:32:43.698140] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:48.056 [2024-11-25 12:32:43.698714] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:48.056 [2024-11-25 12:32:43.698974] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:48.056 [2024-11-25 12:32:43.699391] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
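
Unlike the single-core queue-depth run, the multipath target starts with -m 0xF, and the notices above show the result: one reactor per selected core plus one nvmf_tgt_poll_group thread per core, all switched to interrupt mode. Decoding the mask is plain bit arithmetic:

    # -m 0xF = binary 1111 -> cores 0,1,2,3 (one reactor + one poll group each)
    mask=0xF
    for c in $(seq 0 31); do
        (( (mask >> c) & 1 )) && echo "core $c selected"
    done
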
00:33:48.645 12:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:48.645 12:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:33:48.645 12:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:48.645 12:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:48.645 12:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:33:48.645 12:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:48.645 12:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:48.911 [2024-11-25 12:32:44.549848] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:48.911 12:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:33:49.168 Malloc0 00:33:49.168 12:32:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:33:49.168 12:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:49.425 12:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:33:49.683 [2024-11-25 12:32:45.417686] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:33:49.683 12:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:33:49.941 [2024-11-25 12:32:45.613615] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:33:49.941 12:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid=5934b511-67a7-4c73-84f0-52fdf36bb79d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:33:49.941 12:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid=5934b511-67a7-4c73-84f0-52fdf36bb79d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:33:50.199 12:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:33:50.199 12:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:33:50.199 12:32:45 
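The whole multipath fabric is provisioned above with a handful of RPCs, after which the host attaches through both listeners; because both controllers expose the same subsystem NQN and serial, the kernel folds them into one multipathed namespace. Condensed from the trace (rpc.py abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py, and NVME_HOSTNQN/NVME_HOSTID stand for the values nvmf/common.sh generated for this run; per nvme-cli, -g/-G request TCP header and data digests):

rpc.py nvmf_create_transport -t tcp -o -u 8192        # flags exactly as traced above
rpc.py bdev_malloc_create 64 512 -b Malloc0           # 64 MB RAM-backed bdev, 512 B blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r   # -r: ANA reporting
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420
nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G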
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:33:50.199 12:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:33:50.199 12:32:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:33:52.098 12:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:33:52.098 12:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:33:52.098 12:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:33:52.098 12:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:33:52.098 12:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:33:52.098 12:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:33:52.098 12:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:33:52.098 12:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:33:52.098 12:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:33:52.098 12:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:33:52.098 12:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:33:52.098 12:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:33:52.098 12:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:33:52.098 12:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:33:52.098 12:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:33:52.098 12:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:33:52.098 12:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:33:52.098 12:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:33:52.098 12:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:33:52.098 12:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:33:52.098 12:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
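Once the serial shows up in lsblk, get_subsystem maps it back to a kernel nvme subsystem and collects the per-controller path devices (nvme0c0n1 and nvme0c1n1 here). An illustrative reconstruction of that sysfs walk; the exact attribute files are simplified relative to multipath.sh:

# find the nvme-subsystem entry whose NQN and serial match this test
for s in /sys/class/nvme-subsystem/*; do
    [[ $(cat "$s/subsysnqn") == nqn.2016-06.io.spdk:cnode1 ]] || continue
    grep -q SPDKISFASTANDAWESOME "$s"/nvme*/serial 2> /dev/null || continue
    subsystem=${s##*/}                                      # nvme-subsys0 in this run
    break
done
paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*)  # one entry per fabric path
paths=("${paths[@]##*/}")                                   # -> nvme0c0n1 nvme0c1n1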
target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:33:52.098 12:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:33:52.098 12:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:33:52.098 12:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:33:52.098 12:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:33:52.098 12:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:33:52.098 12:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:33:52.098 12:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:33:52.098 12:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:33:52.098 12:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:33:52.098 12:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:33:52.098 12:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:33:52.098 12:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=103702 00:33:52.098 12:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:33:52.098 12:32:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:33:52.098 [global] 00:33:52.098 thread=1 00:33:52.098 invalidate=1 00:33:52.098 rw=randrw 00:33:52.098 time_based=1 00:33:52.098 runtime=6 00:33:52.098 ioengine=libaio 00:33:52.098 direct=1 00:33:52.098 bs=4096 00:33:52.098 iodepth=128 00:33:52.098 norandommap=0 00:33:52.098 numjobs=1 00:33:52.098 00:33:52.098 verify_dump=1 00:33:52.098 verify_backlog=512 00:33:52.098 verify_state_save=0 00:33:52.098 do_verify=1 00:33:52.098 verify=crc32c-intel 00:33:52.098 [job0] 00:33:52.098 filename=/dev/nvme0n1 00:33:52.098 Could not set queue depth (nvme0n1) 00:33:52.356 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:52.356 fio-3.35 00:33:52.356 Starting 1 thread 00:33:53.288 12:32:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:33:53.288 12:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:33:53.545 12:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
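check_ana_state, whose xtrace dominates the next stretch of the log, is a small polling helper: it re-reads the path's ana_state attribute until the expected ANA state appears or a budget of roughly 20 one-second attempts runs out. Reconstructed from the trace (target/multipath.sh@18-26):

check_ana_state() {
    local path=$1 ana_state=$2
    local timeout=20
    local ana_state_f=/sys/block/$path/ana_state
    while [[ ! -e $ana_state_f ]] || [[ $(<"$ana_state_f") != "$ana_state" ]]; do
        (( timeout-- == 0 )) && return 1   # state never converged
        sleep 1s
    done
}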
target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:33:53.545 12:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:33:53.545 12:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:33:53.545 12:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:33:53.545 12:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:33:53.545 12:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:33:53.545 12:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:33:53.545 12:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:33:53.545 12:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:33:53.545 12:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:33:53.545 12:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:33:53.545 12:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:33:53.545 12:32:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:33:54.476 12:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:33:54.476 12:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:33:54.476 12:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:33:54.476 12:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:33:54.734 12:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:33:54.993 12:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:33:54.993 12:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:33:54.993 12:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:33:54.993 12:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:33:54.993 12:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:33:54.993 12:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:33:54.993 12:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:33:54.993 12:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:33:54.993 12:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:33:54.993 12:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:33:54.993 12:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:33:54.993 12:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:33:54.993 12:32:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:33:55.926 12:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:33:55.926 12:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]]
00:33:55.926 12:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]]
00:33:55.926 12:32:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 103702
00:33:58.452
00:33:58.452 job0: (groupid=0, jobs=1): err= 0: pid=103723: Mon Nov 25 12:32:54 2024
00:33:58.452 read: IOPS=14.3k, BW=55.8MiB/s (58.5MB/s)(335MiB/6004msec)
00:33:58.452 slat (nsec): min=1429, max=5872.5k, avg=41180.18, stdev=213717.02
00:33:58.452 clat (usec): min=906, max=16836, avg=5990.88, stdev=1135.89
00:33:58.452 lat (usec): min=912, max=16843, avg=6032.06, stdev=1150.05
00:33:58.452 clat percentiles (usec):
00:33:58.452 | 1.00th=[ 3654], 5.00th=[ 4424], 10.00th=[ 4883], 20.00th=[ 5211],
00:33:58.452 | 30.00th=[ 5407], 40.00th=[ 5604], 50.00th=[ 5735], 60.00th=[ 5997],
00:33:58.452 | 70.00th=[ 6259], 80.00th=[ 6783], 90.00th=[ 7504], 95.00th=[ 8029],
00:33:58.452 | 99.00th=[ 9503], 99.50th=[10290], 99.90th=[11731], 99.95th=[13829],
00:33:58.452 | 99.99th=[16712]
00:33:58.452 bw ( KiB/s): min=11632, max=37576, per=51.13%, avg=29224.73, stdev=8552.45, samples=11
00:33:58.452 iops : min= 2908, max= 9394, avg=7306.18, stdev=2138.11, samples=11
00:33:58.452 write: IOPS=8623, BW=33.7MiB/s (35.3MB/s)(176MiB/5232msec); 0 zone resets
00:33:58.452 slat (usec): min=2, max=4418, avg=49.62, stdev=130.29
00:33:58.452 clat (usec): min=647, max=15994, avg=5488.20, stdev=943.31
00:33:58.452 lat (usec): min=667, max=16010, avg=5537.82, stdev=949.48
00:33:58.452 clat percentiles (usec):
00:33:58.452 | 1.00th=[ 3097], 5.00th=[ 4293], 10.00th=[ 4621], 20.00th=[ 4883],
00:33:58.452 | 30.00th=[ 5080], 40.00th=[ 5211], 50.00th=[ 5342], 60.00th=[ 5473],
00:33:58.452 | 70.00th=[ 5669], 80.00th=[ 6128], 90.00th=[ 6718], 95.00th=[ 7046],
00:33:58.452 | 99.00th=[ 7963], 99.50th=[ 9110], 99.90th=[11600], 99.95th=[14484],
00:33:58.452 | 99.99th=[15926]
00:33:58.452 bw ( KiB/s): min=12256, max=36992, per=84.87%, avg=29274.18, stdev=8129.18, samples=11
00:33:58.452 iops : min= 3064, max= 9248, avg=7318.55, stdev=2032.29, samples=11
00:33:58.452 lat (usec) : 750=0.01%, 1000=0.01%
00:33:58.452 lat (msec) : 2=0.02%, 4=2.49%, 10=96.93%, 20=0.55%
00:33:58.452 cpu : usr=3.28%, sys=16.79%, ctx=9964, majf=0, minf=139
00:33:58.452 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7%
00:33:58.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:33:58.452 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:33:58.452 issued rwts: total=85796,45117,0,0 short=0,0,0,0 dropped=0,0,0,0
00:33:58.452 latency : target=0, window=0, percentile=100.00%, depth=128
00:33:58.452
00:33:58.452 Run status group 0 (all jobs):
00:33:58.452 READ: bw=55.8MiB/s (58.5MB/s), 55.8MiB/s-55.8MiB/s (58.5MB/s-58.5MB/s), io=335MiB (351MB), run=6004-6004msec
00:33:58.452 WRITE: bw=33.7MiB/s (35.3MB/s), 33.7MiB/s-33.7MiB/s (35.3MB/s-35.3MB/s), io=176MiB (185MB), run=5232-5232msec
00:33:58.452
00:33:58.452 Disk stats (read/write):
00:33:58.452 nvme0n1: ios=84665/44162, merge=0/0, ticks=482630/232702, in_queue=715332, util=98.62%
00:33:58.452 12:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized
00:33:58.709 12:32:54
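What the six seconds of fio just proved: while randrw I/O ran against /dev/nvme0n1 at queue depth 128, the test pulled each path out from under it in turn (inaccessible on one listener, non_optimized on the other, then swapped) and the job still finished with err=0. Each move is a pair of RPCs plus a host-side wait, for example (commands verbatim from the trace; note the RPC spells non_optimized while sysfs reports non-optimized):

rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.3 -s 4420 -n inaccessible
rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.4 -s 4420 -n non_optimized
check_ana_state nvme0c0n1 inaccessible     # host view converges within a second or two
check_ana_state nvme0c1n1 non-optimized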
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:33:58.709 12:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:33:58.709 12:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:33:58.709 12:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:33:58.709 12:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:33:58.709 12:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:33:58.709 12:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:33:58.709 12:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:33:58.709 12:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:33:58.709 12:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:33:58.709 12:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:33:58.709 12:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:33:58.709 12:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:33:58.709 12:32:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:34:00.081 12:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:34:00.081 12:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:34:00.081 12:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:34:00.081 12:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:34:00.081 12:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=103850 00:34:00.081 12:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:34:00.081 12:32:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:34:00.081 [global] 00:34:00.081 thread=1 00:34:00.082 invalidate=1 00:34:00.082 rw=randrw 00:34:00.082 time_based=1 00:34:00.082 runtime=6 00:34:00.082 ioengine=libaio 00:34:00.082 direct=1 00:34:00.082 bs=4096 00:34:00.082 iodepth=128 00:34:00.082 norandommap=0 00:34:00.082 numjobs=1 00:34:00.082 00:34:00.082 verify_dump=1 00:34:00.082 verify_backlog=512 00:34:00.082 verify_state_save=0 00:34:00.082 do_verify=1 00:34:00.082 verify=crc32c-intel 00:34:00.082 [job0] 00:34:00.082 filename=/dev/nvme0n1 00:34:00.082 Could not set queue depth (nvme0n1) 00:34:00.082 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:00.082 fio-3.35 00:34:00.082 Starting 1 thread 00:34:01.012 12:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:34:01.012 12:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:34:01.269 12:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:34:01.269 12:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:34:01.269 12:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:34:01.269 12:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:34:01.269 12:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:34:01.269 12:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:34:01.269 12:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:34:01.269 12:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:34:01.269 12:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:34:01.269 12:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:34:01.269 12:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:34:01.269 12:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:34:01.269 12:32:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s 00:34:02.200 12:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:34:02.200 12:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:34:02.200 12:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:34:02.200 12:32:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:34:02.457 12:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:34:02.716 12:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:34:02.716 12:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:34:02.716 12:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:34:02.716 12:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:34:02.716 12:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]]
00:34:02.716 12:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]]
00:34:02.716 12:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible
00:34:02.716 12:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible
00:34:02.716 12:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20
00:34:02.716 12:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state
00:34:02.716 12:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]]
00:34:02.716 12:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]]
00:34:02.716 12:32:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # sleep 1s
00:34:03.648 12:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@26 -- # (( timeout-- == 0 ))
00:34:03.648 12:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]]
00:34:03.648 12:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]]
00:34:03.648 12:32:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 103850
00:34:06.174
00:34:06.174 job0: (groupid=0, jobs=1): err= 0: pid=103875: Mon Nov 25 12:33:01 2024
00:34:06.174 read: IOPS=16.8k, BW=65.6MiB/s (68.8MB/s)(394MiB/6002msec)
00:34:06.174 slat (nsec): min=1640, max=5271.2k, avg=31104.91, stdev=179650.45
00:34:06.174 clat (usec): min=190, max=14939, avg=5215.65, stdev=1137.53
00:34:06.174 lat (usec): min=199, max=14960, avg=5246.75, stdev=1152.65
00:34:06.174 clat percentiles (usec):
00:34:06.174 | 1.00th=[ 2057], 5.00th=[ 3130], 10.00th=[ 3785], 20.00th=[ 4555],
00:34:06.174 | 30.00th=[ 4948], 40.00th=[ 5145], 50.00th=[ 5276], 60.00th=[ 5473],
00:34:06.174 | 70.00th=[ 5669], 80.00th=[ 5866], 90.00th=[ 6325], 95.00th=[ 6980],
00:34:06.174 | 99.00th=[ 8291], 99.50th=[ 8717], 99.90th=[10683], 99.95th=[12125],
00:34:06.174 | 99.99th=[13304]
00:34:06.174 bw ( KiB/s): min=20032, max=49976, per=52.76%, avg=35424.00, stdev=10540.95, samples=11
00:34:06.174 iops : min= 5008, max=12494, avg=8856.00, stdev=2635.24, samples=11
00:34:06.174 write: IOPS=10.1k, BW=39.3MiB/s (41.3MB/s)(201MiB/5108msec); 0 zone resets
00:34:06.174 slat (usec): min=2, max=2401, avg=37.64, stdev=98.82
00:34:06.174 clat (usec): min=182, max=9814, avg=4597.93, stdev=988.19
00:34:06.174 lat (usec): min=206, max=9829, avg=4635.57, stdev=997.02
00:34:06.174 clat percentiles (usec):
00:34:06.174 | 1.00th=[ 1958], 5.00th=[ 2638], 10.00th=[ 3064], 20.00th=[ 3851],
00:34:06.174 | 30.00th=[ 4424], 40.00th=[ 4686], 50.00th=[ 4817], 60.00th=[ 5014],
00:34:06.174 | 70.00th=[ 5145], 80.00th=[ 5276], 90.00th=[ 5473], 95.00th=[ 5669],
00:34:06.174 | 99.00th=[ 7177], 99.50th=[ 7701], 99.90th=[ 8717], 99.95th=[ 8979],
00:34:06.174 | 99.99th=[ 9503]
00:34:06.174 bw ( KiB/s): min=20664, max=49152, per=87.88%, avg=35403.64, stdev=10164.32, samples=11
00:34:06.174 iops : min= 5166, max=12288, avg=8850.91, stdev=2541.08, samples=11
00:34:06.174 lat (usec) : 250=0.01%, 500=0.05%, 750=0.05%, 1000=0.08%
00:34:06.174 lat (msec) : 2=0.78%, 4=14.71%, 10=84.22%, 20=0.11%
00:34:06.174 cpu : usr=3.28%, sys=18.95%, ctx=11568, majf=0, minf=127
00:34:06.174 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7%
00:34:06.174 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:34:06.174 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:34:06.174 issued rwts: total=100744,51447,0,0 short=0,0,0,0 dropped=0,0,0,0
00:34:06.174 latency : target=0, window=0, percentile=100.00%, depth=128
00:34:06.174
00:34:06.174 Run status group 0 (all jobs):
00:34:06.174 READ: bw=65.6MiB/s (68.8MB/s), 65.6MiB/s-65.6MiB/s (68.8MB/s-68.8MB/s), io=394MiB (413MB), run=6002-6002msec
00:34:06.174 WRITE: bw=39.3MiB/s (41.3MB/s), 39.3MiB/s-39.3MiB/s (41.3MB/s-41.3MB/s), io=201MiB (211MB), run=5108-5108msec
00:34:06.174
00:34:06.174 Disk stats (read/write):
00:34:06.174 nvme0n1: ios=98585/51447, merge=0/0, ticks=488294/225993, in_queue=714287, util=98.75%
00:34:06.174 12:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:34:06.174 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s)
00:34:06.174 12:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:34:06.174 12:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0
00:34:06.174 12:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:34:06.174 12:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:34:06.174 12:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:34:06.174 12:33:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:34:06.433 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0
00:34:06.433 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:34:06.433 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state
00:34:06.433 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state
00:34:06.433 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT
00:34:06.433 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini
00:34:06.433 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:34:06.433 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync
00:34:06.433 12:33:02
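Teardown mirrors setup: a single nvme disconnect against the subsystem NQN drops both controllers at once, and waitforserial_disconnect polls lsblk until the serial is gone before the subsystem is deleted. Condensed from the trace:

nvme disconnect -n nqn.2016-06.io.spdk:cnode1      # reports: disconnected 2 controller(s)
while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
    sleep 1                                        # wait for both path devices to vanish
done
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1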
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:06.433 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:34:06.433 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:06.433 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:06.433 rmmod nvme_tcp 00:34:06.433 rmmod nvme_fabrics 00:34:06.433 rmmod nvme_keyring 00:34:06.433 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:06.433 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:34:06.433 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:34:06.433 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 103570 ']' 00:34:06.433 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 103570 00:34:06.433 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 103570 ']' 00:34:06.433 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 103570 00:34:06.433 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:34:06.433 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:06.433 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103570 00:34:06.433 killing process with pid 103570 00:34:06.433 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:06.433 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:06.433 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103570' 00:34:06.433 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 103570 00:34:06.433 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 103570 00:34:06.690 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:06.690 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:06.690 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:06.690 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:34:06.690 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:34:06.690 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:34:06.690 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
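The rmmod lines above are the side effect of modprobe -v -r nvme-tcp pulling out its now-unused dependencies, and the target itself is stopped through autotest_common.sh's killprocess, which re-checks what the pid belongs to before signalling it. A condensed sketch of that helper (simplified; the real function has extra handling for sudo-wrapped processes):

killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid"                                    # must still be alive
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for an SPDK app
    [[ $process_name != sudo ]] || return 1           # never signal a sudo wrapper directly
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"
}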
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:06.690 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:06.690 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:34:06.690 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:34:06.690 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:34:06.690 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:34:06.690 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:34:06.690 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:34:06.690 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:34:06.690 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:34:06.690 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:34:06.690 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:34:06.690 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:34:06.948 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:34:06.948 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:06.948 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:06.948 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:34:06.948 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:06.948 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:06.948 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:06.948 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:34:06.948 00:34:06.948 real 0m19.695s 00:34:06.948 user 1m8.849s 00:34:06.948 sys 0m6.147s 00:34:06.948 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:06.948 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:34:06.948 ************************************ 00:34:06.948 END TEST nvmf_target_multipath 00:34:06.948 ************************************ 00:34:06.948 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:34:06.948 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:06.948 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:06.948 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:06.948 ************************************ 00:34:06.948 START TEST nvmf_zcopy 00:34:06.948 ************************************ 00:34:06.948 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:34:06.948 * Looking for test storage... 00:34:06.948 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:34:06.948 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:06.948 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:06.949 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:34:07.207 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:07.207 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:07.207 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:07.207 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:07.207 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:34:07.207 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:34:07.207 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:34:07.207 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:34:07.207 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:34:07.207 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:34:07.207 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:34:07.207 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:07.207 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:34:07.207 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:34:07.207 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:07.207 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:07.207 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:34:07.207 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:34:07.207 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:07.207 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:34:07.207 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:34:07.207 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:34:07.207 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:34:07.207 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:07.207 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:34:07.207 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:34:07.207 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:07.207 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:07.207 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:07.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:07.208 --rc genhtml_branch_coverage=1 00:34:07.208 --rc genhtml_function_coverage=1 00:34:07.208 --rc genhtml_legend=1 00:34:07.208 --rc geninfo_all_blocks=1 00:34:07.208 --rc geninfo_unexecuted_blocks=1 00:34:07.208 00:34:07.208 ' 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:07.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:07.208 --rc genhtml_branch_coverage=1 00:34:07.208 --rc genhtml_function_coverage=1 00:34:07.208 --rc genhtml_legend=1 00:34:07.208 --rc geninfo_all_blocks=1 00:34:07.208 --rc geninfo_unexecuted_blocks=1 00:34:07.208 00:34:07.208 ' 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:07.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:07.208 --rc genhtml_branch_coverage=1 00:34:07.208 --rc genhtml_function_coverage=1 00:34:07.208 --rc genhtml_legend=1 00:34:07.208 --rc geninfo_all_blocks=1 00:34:07.208 --rc geninfo_unexecuted_blocks=1 00:34:07.208 00:34:07.208 ' 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:07.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:07.208 --rc genhtml_branch_coverage=1 00:34:07.208 --rc genhtml_function_coverage=1 00:34:07.208 --rc genhtml_legend=1 00:34:07.208 --rc geninfo_all_blocks=1 00:34:07.208 --rc geninfo_unexecuted_blocks=1 00:34:07.208 00:34:07.208 ' 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
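The xtrace above is scripts/common.sh gating on the installed lcov version (lt 1.15 2 asks "is 1.15 older than 2?"). The comparison tokenizes both version strings on '.', '-' and ':' and walks the fields numerically; a condensed, illustrative reconstruction:

lt() { cmp_versions "$1" '<' "$2"; }
cmp_versions() {
    local -a ver1 ver2
    local v op=$2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$3"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && [[ $op == '<' ]] && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && [[ $op == '>' ]] && return 0
        (( ${ver1[v]:-0} != ${ver2[v]:-0} )) && return 1   # decided, but not in op's favor
    done
    [[ $op == *=* ]]   # exact equality only satisfies <=, >= or ==
}
# usage mirroring the trace: pick old-style coverage flags for pre-2.0 lcov
lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov is pre-2.0"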
target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:07.208 12:33:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:34:07.208 12:33:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:34:07.208 Cannot find device "nvmf_init_br" 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:34:07.208 Cannot find device "nvmf_init_br2" 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:34:07.208 Cannot find device "nvmf_tgt_br" 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:34:07.208 Cannot find device "nvmf_tgt_br2" 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:34:07.208 Cannot find device "nvmf_init_br" 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:34:07.208 Cannot find device "nvmf_init_br2" 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:34:07.208 Cannot find device "nvmf_tgt_br" 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:34:07.208 Cannot find device "nvmf_tgt_br2" 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:34:07.208 Cannot find device 
"nvmf_br" 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:34:07.208 Cannot find device "nvmf_init_if" 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:34:07.208 Cannot find device "nvmf_init_if2" 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:07.208 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:07.208 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:34:07.208 12:33:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:34:07.208 12:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:34:07.208 12:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:34:07.208 12:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:34:07.208 12:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:34:07.208 12:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:34:07.208 12:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:34:07.208 12:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:34:07.208 12:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:34:07.208 12:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:34:07.208 12:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:34:07.208 12:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:34:07.208 12:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:34:07.208 12:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:34:07.208 12:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:34:07.208 12:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:34:07.208 12:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:34:07.465 12:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:34:07.465 12:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:34:07.465 12:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:34:07.465 12:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:34:07.465 12:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:34:07.465 12:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:34:07.465 12:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:34:07.465 12:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:34:07.465 12:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:34:07.465 12:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:34:07.465 12:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:34:07.465 12:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:34:07.465 12:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:34:07.465 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:34:07.465 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:34:07.465 00:34:07.465 --- 10.0.0.3 ping statistics --- 00:34:07.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:07.465 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:34:07.465 12:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:34:07.465 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:34:07.465 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:34:07.465 00:34:07.465 --- 10.0.0.4 ping statistics --- 00:34:07.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:07.465 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:34:07.465 12:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:34:07.465 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:07.465 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:34:07.465 00:34:07.465 --- 10.0.0.1 ping statistics --- 00:34:07.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:07.465 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:34:07.465 12:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:34:07.465 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:07.465 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:34:07.465 00:34:07.465 --- 10.0.0.2 ping statistics --- 00:34:07.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:07.465 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:34:07.465 12:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:07.465 12:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:34:07.465 12:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:07.465 12:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:07.465 12:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:07.465 12:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:07.465 12:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:07.465 12:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:07.465 12:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:07.465 12:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:34:07.465 12:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:07.465 12:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:07.465 12:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:07.465 12:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=104199 00:34:07.465 12:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 104199 00:34:07.465 12:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 104199 ']' 00:34:07.465 12:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:07.465 12:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:07.466 12:33:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:07.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:07.466 12:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:34:07.466 12:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:07.466 12:33:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:07.466 [2024-11-25 12:33:03.202696] thread.c:3055:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:07.466 [2024-11-25 12:33:03.203482] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:34:07.466 [2024-11-25 12:33:03.203530] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:07.723 [2024-11-25 12:33:03.342601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:07.723 [2024-11-25 12:33:03.379412] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:07.723 [2024-11-25 12:33:03.379450] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:07.723 [2024-11-25 12:33:03.379456] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:07.723 [2024-11-25 12:33:03.379460] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:07.723 [2024-11-25 12:33:03.379464] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:07.723 [2024-11-25 12:33:03.379696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:07.723 [2024-11-25 12:33:03.445906] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:07.723 [2024-11-25 12:33:03.446140] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
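[Annotation] The preceding block is nvmf_veth_init building the per-test virtual network: four veth pairs, a network namespace (nvmf_tgt_ns_spdk) holding the target-side ends, a bridge (nvmf_br) joining the peer ends, iptables ACCEPT rules for NVMe/TCP port 4420, and ping checks in both directions. The "Cannot find device" notices at the start are a harmless cleanup pass; the suite first tries to delete any leftover devices and swallows each failure with `true`. A minimal standalone sketch of the same topology, reconstructed only from the commands visible in this log (the suite's nvmf/common.sh additionally tags its iptables rules with an SPDK_NVMF comment via its ipts wrapper so they can be removed later):

  #!/usr/bin/env bash
  # Sketch of the nvmf test network as set up above. Names and addresses are
  # the ones from the log; stale-device teardown and error handling omitted.
  set -e
  NS=nvmf_tgt_ns_spdk

  ip netns add "$NS"

  # Four veth pairs; the *_if ends carry addresses, the *_br ends get bridged.
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

  # Target interfaces live inside the namespace, initiator interfaces outside.
  ip link set nvmf_tgt_if  netns "$NS"
  ip link set nvmf_tgt_if2 netns "$NS"

  # Initiators on 10.0.0.1/.2, targets on 10.0.0.3/.4, all in one /24.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec "$NS" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

  ip link set nvmf_init_if up
  ip link set nvmf_init_if2 up
  ip link set nvmf_init_br up
  ip link set nvmf_init_br2 up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec "$NS" ip link set nvmf_tgt_if up
  ip netns exec "$NS" ip link set nvmf_tgt_if2 up
  ip netns exec "$NS" ip link set lo up

  # One bridge ties the four peer ends into a single L2 segment.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br  master nvmf_br
  ip link set nvmf_init_br2 master nvmf_br
  ip link set nvmf_tgt_br   master nvmf_br
  ip link set nvmf_tgt_br2  master nvmf_br

  # Let NVMe/TCP (port 4420) in on the initiator interfaces and allow
  # bridge-local forwarding.
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # Sanity checks, matching the four pings in the log.
  ping -c 1 10.0.0.3
  ping -c 1 10.0.0.4
  ip netns exec "$NS" ping -c 1 10.0.0.1
  ip netns exec "$NS" ping -c 1 10.0.0.2

With this layout the initiator addresses and the namespaced target addresses share one L2 segment through nvmf_br, which is why all four pings in the log succeed with sub-0.1 ms RTTs before the target application is even started.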
00:34:08.289 12:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:08.289 12:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:34:08.289 12:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:08.289 12:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:08.289 12:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:08.289 12:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:08.289 12:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:34:08.289 12:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:34:08.289 12:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.289 12:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:08.289 [2024-11-25 12:33:04.120448] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:08.289 12:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.289 12:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:34:08.289 12:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.289 12:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:08.289 12:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.289 12:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:34:08.289 12:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.289 12:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:08.289 [2024-11-25 12:33:04.140397] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:34:08.289 12:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.289 12:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:34:08.289 12:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.289 12:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:08.289 12:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.289 12:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:34:08.289 12:33:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.289 12:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:08.546 malloc0 00:34:08.546 12:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.547 12:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:34:08.547 12:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.547 12:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:08.547 12:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.547 12:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:34:08.547 12:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:34:08.547 12:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:34:08.547 12:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:34:08.547 12:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:08.547 12:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:08.547 { 00:34:08.547 "params": { 00:34:08.547 "name": "Nvme$subsystem", 00:34:08.547 "trtype": "$TEST_TRANSPORT", 00:34:08.547 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:08.547 "adrfam": "ipv4", 00:34:08.547 "trsvcid": "$NVMF_PORT", 00:34:08.547 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:08.547 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:08.547 "hdgst": ${hdgst:-false}, 00:34:08.547 "ddgst": ${ddgst:-false} 00:34:08.547 }, 00:34:08.547 "method": "bdev_nvme_attach_controller" 00:34:08.547 } 00:34:08.547 EOF 00:34:08.547 )") 00:34:08.547 12:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:34:08.547 12:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:34:08.547 12:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:34:08.547 12:33:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:08.547 "params": { 00:34:08.547 "name": "Nvme1", 00:34:08.547 "trtype": "tcp", 00:34:08.547 "traddr": "10.0.0.3", 00:34:08.547 "adrfam": "ipv4", 00:34:08.547 "trsvcid": "4420", 00:34:08.547 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:08.547 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:08.547 "hdgst": false, 00:34:08.547 "ddgst": false 00:34:08.547 }, 00:34:08.547 "method": "bdev_nvme_attach_controller" 00:34:08.547 }' 00:34:08.547 [2024-11-25 12:33:04.215074] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
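[Annotation] At this point the target has been provisioned over JSON-RPC and bdevperf is starting as the initiator. Condensed, the sequence the log shows is sketched below; rpc_cmd is the test suite's wrapper around SPDK's scripts/rpc.py, and the JSON envelope around the bdev_nvme_attach_controller call is an assumption here, since only the params object printed by gen_nvmf_target_json appears verbatim in the log:

  # Target side: TCP transport with zero-copy on and in-capsule data off
  # (-c 0), one subsystem with any host allowed (-a), up to 10 namespaces
  # (-m 10), data and discovery listeners on 10.0.0.3:4420, and a 32 MB /
  # 4096-byte-block malloc bdev exposed as namespace 1.
  rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
  rpc_cmd bdev_malloc_create 32 4096 -b malloc0
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

  # Initiator side: bdevperf takes its bdev config as JSON on a file
  # descriptor (a heredoc on stdin here, instead of the test's /dev/fd/62
  # process substitution).
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/stdin \
    -t 10 -q 128 -w verify -o 8192 <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme1",
              "trtype": "tcp",
              "traddr": "10.0.0.3",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF

The 10-second verify pass that follows produces the ~8.8k IOPS samples and the latency table. After it completes, a second bdevperf (-t 5 -q 128 -w randrw -M 50 -o 8192) is started, and while it runs the test repeatedly re-issues nvmf_subsystem_add_ns for the already-attached namespace; every attempt fails as intended with Code=-32602 "Requested NSID 1 already in use", and each one passes through nvmf_rpc_ns_paused, pausing and resuming the live subsystem under I/O, which is the RPC path the zcopy test is exercising.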
00:34:08.547 [2024-11-25 12:33:04.215131] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104250 ] 00:34:08.547 [2024-11-25 12:33:04.355389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:08.547 [2024-11-25 12:33:04.396560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:08.804 Running I/O for 10 seconds... 00:34:11.111 8791.00 IOPS, 68.68 MiB/s [2024-11-25T12:33:07.557Z] 8827.00 IOPS, 68.96 MiB/s [2024-11-25T12:33:08.934Z] 8836.33 IOPS, 69.03 MiB/s [2024-11-25T12:33:09.866Z] 8875.50 IOPS, 69.34 MiB/s [2024-11-25T12:33:10.798Z] 8864.60 IOPS, 69.25 MiB/s [2024-11-25T12:33:11.734Z] 8846.83 IOPS, 69.12 MiB/s [2024-11-25T12:33:12.668Z] 8834.00 IOPS, 69.02 MiB/s [2024-11-25T12:33:13.600Z] 8815.50 IOPS, 68.87 MiB/s [2024-11-25T12:33:14.975Z] 8814.11 IOPS, 68.86 MiB/s [2024-11-25T12:33:14.975Z] 8813.30 IOPS, 68.85 MiB/s 00:34:19.099 Latency(us) 00:34:19.099 [2024-11-25T12:33:14.975Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:19.099 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:34:19.100 Verification LBA range: start 0x0 length 0x1000 00:34:19.100 Nvme1n1 : 10.01 8815.23 68.87 0.00 0.00 14478.04 2293.76 20870.70 00:34:19.100 [2024-11-25T12:33:14.976Z] =================================================================================================================== 00:34:19.100 [2024-11-25T12:33:14.976Z] Total : 8815.23 68.87 0.00 0.00 14478.04 2293.76 20870.70 00:34:19.100 12:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=104362 00:34:19.100 12:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:34:19.100 12:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:19.100 12:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:34:19.100 12:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:34:19.100 12:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:34:19.100 12:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:34:19.100 12:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:19.100 12:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:19.100 { 00:34:19.100 "params": { 00:34:19.100 "name": "Nvme$subsystem", 00:34:19.100 "trtype": "$TEST_TRANSPORT", 00:34:19.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:19.100 "adrfam": "ipv4", 00:34:19.100 "trsvcid": "$NVMF_PORT", 00:34:19.100 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:19.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:19.100 "hdgst": ${hdgst:-false}, 00:34:19.100 "ddgst": ${ddgst:-false} 00:34:19.100 }, 00:34:19.100 "method": "bdev_nvme_attach_controller" 00:34:19.100 } 00:34:19.100 EOF 00:34:19.100 )") 00:34:19.100 12:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:34:19.100 [2024-11-25 
12:33:14.704063] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.100 [2024-11-25 12:33:14.704107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.100 12:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:34:19.100 12:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:34:19.100 12:33:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:19.100 "params": { 00:34:19.100 "name": "Nvme1", 00:34:19.100 "trtype": "tcp", 00:34:19.100 "traddr": "10.0.0.3", 00:34:19.100 "adrfam": "ipv4", 00:34:19.100 "trsvcid": "4420", 00:34:19.100 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:19.100 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:19.100 "hdgst": false, 00:34:19.100 "ddgst": false 00:34:19.100 }, 00:34:19.100 "method": "bdev_nvme_attach_controller" 00:34:19.100 }' 00:34:19.100 2024/11/25 12:33:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:19.100 [2024-11-25 12:33:14.712011] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.100 [2024-11-25 12:33:14.712027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.100 2024/11/25 12:33:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:19.100 [2024-11-25 12:33:14.720007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.100 [2024-11-25 12:33:14.720022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.100 2024/11/25 12:33:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:19.100 [2024-11-25 12:33:14.728012] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.100 [2024-11-25 12:33:14.728030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.100 2024/11/25 12:33:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:19.100 [2024-11-25 12:33:14.736006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.100 [2024-11-25 12:33:14.736024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.100 2024/11/25 12:33:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:19.100 [2024-11-25 12:33:14.741807] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 
initialization... 00:34:19.100 [2024-11-25 12:33:14.741862] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104362 ] 00:34:19.100 [2024-11-25 12:33:14.744006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.100 [2024-11-25 12:33:14.744023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.100 2024/11/25 12:33:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:19.100 [2024-11-25 12:33:14.752008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.100 [2024-11-25 12:33:14.752026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.100 2024/11/25 12:33:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:19.100 [2024-11-25 12:33:14.760010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.100 [2024-11-25 12:33:14.760029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.100 2024/11/25 12:33:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:19.100 [2024-11-25 12:33:14.768012] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.100 [2024-11-25 12:33:14.768031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.100 2024/11/25 12:33:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:19.100 [2024-11-25 12:33:14.776010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.100 [2024-11-25 12:33:14.776028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.100 2024/11/25 12:33:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:19.100 [2024-11-25 12:33:14.784010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.100 [2024-11-25 12:33:14.784027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.100 2024/11/25 12:33:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: 
Code=-32602 Msg=Invalid parameters 00:34:19.100 [2024-11-25 12:33:14.792009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.100 [2024-11-25 12:33:14.792027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.100 2024/11/25 12:33:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:19.100 [2024-11-25 12:33:14.804013] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.100 [2024-11-25 12:33:14.804031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.100 2024/11/25 12:33:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:19.100 [2024-11-25 12:33:14.812011] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.100 [2024-11-25 12:33:14.812029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.100 2024/11/25 12:33:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:19.100 [2024-11-25 12:33:14.820008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.100 [2024-11-25 12:33:14.820025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.100 2024/11/25 12:33:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:19.100 [2024-11-25 12:33:14.828007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.100 [2024-11-25 12:33:14.828024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.100 2024/11/25 12:33:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:19.100 [2024-11-25 12:33:14.836008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.100 [2024-11-25 12:33:14.836024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.100 2024/11/25 12:33:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:19.100 [2024-11-25 12:33:14.844008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.100 [2024-11-25 12:33:14.844025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.101 2024/11/25 
12:33:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:19.101 [2024-11-25 12:33:14.852008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.101 [2024-11-25 12:33:14.852023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.101 2024/11/25 12:33:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:19.101 [2024-11-25 12:33:14.860007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.101 [2024-11-25 12:33:14.860022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.101 2024/11/25 12:33:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:19.101 [2024-11-25 12:33:14.868008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.101 [2024-11-25 12:33:14.868023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.101 2024/11/25 12:33:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:19.101 [2024-11-25 12:33:14.876009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.101 [2024-11-25 12:33:14.876025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.101 [2024-11-25 12:33:14.878474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:19.101 2024/11/25 12:33:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:19.101 [2024-11-25 12:33:14.884009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.101 [2024-11-25 12:33:14.884031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.101 2024/11/25 12:33:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:19.101 [2024-11-25 12:33:14.892009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.101 [2024-11-25 12:33:14.892029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.101 2024/11/25 12:33:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error 
received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:19.101 [2024-11-25 12:33:14.900007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.101 [2024-11-25 12:33:14.900022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.101 2024/11/25 12:33:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:19.101 [2024-11-25 12:33:14.908008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.101 [2024-11-25 12:33:14.908023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.101 2024/11/25 12:33:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:19.101 [2024-11-25 12:33:14.916009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.101 [2024-11-25 12:33:14.916025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.101 [2024-11-25 12:33:14.919244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:19.101 2024/11/25 12:33:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:19.101 [2024-11-25 12:33:14.924008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.101 [2024-11-25 12:33:14.924023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.101 2024/11/25 12:33:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:19.101 [2024-11-25 12:33:14.932019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.101 [2024-11-25 12:33:14.932040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.101 2024/11/25 12:33:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:19.101 [2024-11-25 12:33:14.940016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.101 [2024-11-25 12:33:14.940035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.101 2024/11/25 12:33:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:19.101 [2024-11-25 12:33:14.948016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:34:19.101 [2024-11-25 12:33:14.948033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.101 2024/11/25 12:33:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:19.101 [2024-11-25 12:33:14.956010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.101 [2024-11-25 12:33:14.956025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.101 2024/11/25 12:33:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:19.101 [2024-11-25 12:33:14.964008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.101 [2024-11-25 12:33:14.964022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.101 2024/11/25 12:33:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:19.101 [2024-11-25 12:33:14.972010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.101 [2024-11-25 12:33:14.972025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.361 2024/11/25 12:33:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:19.361 [2024-11-25 12:33:14.980011] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.361 [2024-11-25 12:33:14.980025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.361 2024/11/25 12:33:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:19.361 [2024-11-25 12:33:14.988009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.361 [2024-11-25 12:33:14.988024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.361 2024/11/25 12:33:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:19.361 [2024-11-25 12:33:14.996009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.361 [2024-11-25 12:33:14.996024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.361 2024/11/25 12:33:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) 
nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:19.361 [2024-11-25 12:33:15.004017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.361 [2024-11-25 12:33:15.004036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.361 2024/11/25 12:33:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:19.361 [2024-11-25 12:33:15.012016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.361 [2024-11-25 12:33:15.012036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.361 2024/11/25 12:33:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:19.361 [2024-11-25 12:33:15.020012] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.361 [2024-11-25 12:33:15.020029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.361 2024/11/25 12:33:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:19.361 [2024-11-25 12:33:15.028010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.361 [2024-11-25 12:33:15.028026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.361 2024/11/25 12:33:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:19.361 [2024-11-25 12:33:15.036079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.361 [2024-11-25 12:33:15.036096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.361 2024/11/25 12:33:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:19.361 [2024-11-25 12:33:15.044011] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.361 [2024-11-25 12:33:15.044027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.361 2024/11/25 12:33:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:19.361 [2024-11-25 12:33:15.052013] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.361 [2024-11-25 
12:33:15.052031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.361 2024/11/25 12:33:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:19.361 [2024-11-25 12:33:15.060017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.361 [2024-11-25 12:33:15.060038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.361 2024/11/25 12:33:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:19.361 [2024-11-25 12:33:15.068013] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.361 [2024-11-25 12:33:15.068030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.361 2024/11/25 12:33:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:19.361 Running I/O for 5 seconds... 00:34:19.361 [2024-11-25 12:33:15.081801] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.361 [2024-11-25 12:33:15.081824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.361 2024/11/25 12:33:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:19.361 [2024-11-25 12:33:15.095923] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.361 [2024-11-25 12:33:15.095944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.361 2024/11/25 12:33:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:19.361 [2024-11-25 12:33:15.103140] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.361 [2024-11-25 12:33:15.103161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.361 2024/11/25 12:33:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:19.361 [2024-11-25 12:33:15.116820] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:19.361 [2024-11-25 12:33:15.116839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:19.361 2024/11/25 12:33:15 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
00:34:20.400 [2024-11-25 12:33:16.071488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:20.400 [2024-11-25 12:33:16.071506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:20.400 2024/11/25 12:33:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:34:20.400 17208.00 IOPS, 134.44 MiB/s [2024-11-25T12:33:16.276Z]
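Interleaved with those failures, the I/O workload started with "Running I/O for 5 seconds..." reports its first throughput sample (17208.00 IOPS, 134.44 MiB/s). To confirm why every re-add is rejected, the target's live namespace map can be dumped; a sketch assuming the same running target:

# nvmf_get_subsystems lists each subsystem together with its attached
# namespaces; an existing NSID 1 -> malloc0 mapping on cnode1 would
# account for the repeated -32602 responses above.
scripts/rpc.py nvmf_get_subsystems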
00:34:20.967 [2024-11-25 12:33:16.601348] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:20.967 [2024-11-25 12:33:16.601367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:20.967 2024/11/25 12:33:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params:
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:20.967 [2024-11-25 12:33:16.610842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.967 [2024-11-25 12:33:16.610860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.967 2024/11/25 12:33:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:20.967 [2024-11-25 12:33:16.624923] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.967 [2024-11-25 12:33:16.624942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.967 2024/11/25 12:33:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:20.967 [2024-11-25 12:33:16.635094] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.967 [2024-11-25 12:33:16.635111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.967 2024/11/25 12:33:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:20.967 [2024-11-25 12:33:16.648967] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.967 [2024-11-25 12:33:16.648986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.967 2024/11/25 12:33:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:20.967 [2024-11-25 12:33:16.659841] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.967 [2024-11-25 12:33:16.659860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.967 2024/11/25 12:33:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:20.967 [2024-11-25 12:33:16.673555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.967 [2024-11-25 12:33:16.673574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.967 2024/11/25 12:33:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:20.967 [2024-11-25 12:33:16.683977] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:34:20.967 [2024-11-25 12:33:16.683995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.967 2024/11/25 12:33:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:20.967 [2024-11-25 12:33:16.697363] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.968 [2024-11-25 12:33:16.697381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.968 2024/11/25 12:33:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:20.968 [2024-11-25 12:33:16.706792] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.968 [2024-11-25 12:33:16.706809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.968 2024/11/25 12:33:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:20.968 [2024-11-25 12:33:16.720607] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.968 [2024-11-25 12:33:16.720625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.968 2024/11/25 12:33:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:20.968 [2024-11-25 12:33:16.731094] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.968 [2024-11-25 12:33:16.731113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.968 2024/11/25 12:33:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:20.968 [2024-11-25 12:33:16.744990] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.968 [2024-11-25 12:33:16.745008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.968 2024/11/25 12:33:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:20.968 [2024-11-25 12:33:16.755232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.968 [2024-11-25 12:33:16.755250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.968 2024/11/25 12:33:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:20.968 [2024-11-25 12:33:16.769234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.968 [2024-11-25 12:33:16.769253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.968 2024/11/25 12:33:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:20.968 [2024-11-25 12:33:16.779104] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.968 [2024-11-25 12:33:16.779123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.968 2024/11/25 12:33:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:20.968 [2024-11-25 12:33:16.793133] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.968 [2024-11-25 12:33:16.793151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.968 2024/11/25 12:33:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:20.968 [2024-11-25 12:33:16.804415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.968 [2024-11-25 12:33:16.804433] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.968 2024/11/25 12:33:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:20.968 [2024-11-25 12:33:16.817559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.968 [2024-11-25 12:33:16.817577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.968 2024/11/25 12:33:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:20.968 [2024-11-25 12:33:16.827389] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:20.968 [2024-11-25 12:33:16.827406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:20.968 2024/11/25 12:33:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:21.236 [2024-11-25 12:33:16.841288] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:34:21.236 [2024-11-25 12:33:16.841307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.236 2024/11/25 12:33:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:21.236 [2024-11-25 12:33:16.852728] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.236 [2024-11-25 12:33:16.852748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.236 2024/11/25 12:33:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:21.236 [2024-11-25 12:33:16.865521] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.236 [2024-11-25 12:33:16.865541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.236 2024/11/25 12:33:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:21.236 [2024-11-25 12:33:16.877290] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.236 [2024-11-25 12:33:16.877310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.236 2024/11/25 12:33:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:21.236 [2024-11-25 12:33:16.887521] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.236 [2024-11-25 12:33:16.887539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.236 2024/11/25 12:33:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:21.236 [2024-11-25 12:33:16.901333] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.236 [2024-11-25 12:33:16.901353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.236 2024/11/25 12:33:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:21.236 [2024-11-25 12:33:16.911284] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.236 [2024-11-25 12:33:16.911302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.236 2024/11/25 12:33:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:21.236 [2024-11-25 12:33:16.925009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.236 [2024-11-25 12:33:16.925028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.236 2024/11/25 12:33:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:21.236 [2024-11-25 12:33:16.935783] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.236 [2024-11-25 12:33:16.935803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.236 2024/11/25 12:33:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:21.236 [2024-11-25 12:33:16.949596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.236 [2024-11-25 12:33:16.949616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.236 2024/11/25 12:33:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:21.236 [2024-11-25 12:33:16.959012] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.236 [2024-11-25 12:33:16.959032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.236 2024/11/25 12:33:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:21.237 [2024-11-25 12:33:16.973040] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.237 [2024-11-25 12:33:16.973059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.237 2024/11/25 12:33:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:21.237 [2024-11-25 12:33:16.982336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.237 [2024-11-25 12:33:16.982354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.237 2024/11/25 12:33:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:21.237 [2024-11-25 12:33:16.996240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.237 [2024-11-25 12:33:16.996258] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.237 2024/11/25 12:33:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:21.237 [2024-11-25 12:33:17.003462] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.237 [2024-11-25 12:33:17.003480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.237 2024/11/25 12:33:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:21.237 [2024-11-25 12:33:17.016949] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.237 [2024-11-25 12:33:17.016968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.237 2024/11/25 12:33:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:21.237 [2024-11-25 12:33:17.027510] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.237 [2024-11-25 12:33:17.027528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.237 2024/11/25 12:33:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:21.237 [2024-11-25 12:33:17.041578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.237 [2024-11-25 12:33:17.041598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.237 2024/11/25 12:33:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:21.237 [2024-11-25 12:33:17.050761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.237 [2024-11-25 12:33:17.050780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.237 2024/11/25 12:33:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:21.237 [2024-11-25 12:33:17.064640] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.237 [2024-11-25 12:33:17.064658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.237 2024/11/25 12:33:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:21.237 [2024-11-25 12:33:17.075249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.237 [2024-11-25 12:33:17.075267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.237 17444.50 IOPS, 136.29 MiB/s [2024-11-25T12:33:17.113Z] 2024/11/25 12:33:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:21.237 [2024-11-25 12:33:17.088805] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.237 [2024-11-25 12:33:17.088822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.237 2024/11/25 12:33:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:21.237 [2024-11-25 12:33:17.101247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.237 [2024-11-25 12:33:17.101265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.237 2024/11/25 12:33:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:21.496 [2024-11-25 12:33:17.113161] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.496 [2024-11-25 12:33:17.113178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.496 2024/11/25 12:33:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:21.496 [2024-11-25 12:33:17.125104] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.496 [2024-11-25 12:33:17.125121] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.496 2024/11/25 12:33:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:21.496 [2024-11-25 12:33:17.136703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.496 [2024-11-25 12:33:17.136720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:21.496 2024/11/25 12:33:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:21.496 [2024-11-25 12:33:17.150002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:21.496 [2024-11-25 12:33:17.150019] 
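All of these rejections are the same negative-path check: the test keeps calling the nvmf_subsystem_add_ns RPC for NSID 1 on nqn.2016-06.io.spdk:cnode1 while that NSID is still attached, and spdk_nvmf_subsystem_add_ns_ext refuses the duplicate, which surfaces to the Go JSON-RPC client as Code=-32602 Msg=Invalid parameters. For reference, a minimal sketch of the same call as a raw JSON-RPC exchange, assuming SPDK's default /var/tmp/spdk.sock RPC socket and the malloc0 bdev from this run (the rpc_call helper is illustrative, not part of the test scripts):

import json
import socket

# Minimal JSON-RPC 2.0 client sketch for SPDK's Unix-socket RPC server.
# /var/tmp/spdk.sock is the default path; adjust if the target was started
# with a different -r argument.
SOCK_PATH = "/var/tmp/spdk.sock"

def rpc_call(method, params, req_id=1):
    req = {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(SOCK_PATH)
        sock.sendall(json.dumps(req).encode())
        # The server answers with a single JSON object; read until it parses.
        buf = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                raise ConnectionError("RPC server closed the connection")
            buf += chunk
            try:
                resp, _ = json.JSONDecoder().raw_decode(buf.decode())
                return resp
            except ValueError:
                continue  # incomplete response, keep reading

# Same params as the failing calls in the log: attach bdev malloc0 as NSID 1.
print(rpc_call("nvmf_subsystem_add_ns", {
    "nqn": "nqn.2016-06.io.spdk:cnode1",
    "namespace": {"bdev_name": "malloc0", "nsid": 1},
}))
# While NSID 1 is still attached, the call returns something like:
#   {"jsonrpc": "2.0", "id": 1,
#    "error": {"code": -32602, "message": "Invalid parameters"}}
# which is exactly the Code=-32602 error repeated throughout this log.

The first add for a given nsid succeeds and returns the assigned NSID; every retry with the same nsid fails until the namespace is detached (e.g. via nvmf_subsystem_remove_ns), which is the behavior this portion of the test exercises.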
[... the NSID-conflict sequence continues in the same pattern, 12:33:17.088 through 12:33:17.875 (elapsed 00:34:21.237 to 00:34:22.017) ...]
00:34:22.017 2024/11/25 12:33:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1]
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:22.017 [2024-11-25 12:33:17.889168] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.017 [2024-11-25 12:33:17.889185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.275 2024/11/25 12:33:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:22.275 [2024-11-25 12:33:17.898952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.275 [2024-11-25 12:33:17.898969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.275 2024/11/25 12:33:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:22.275 [2024-11-25 12:33:17.912695] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.275 [2024-11-25 12:33:17.912713] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.275 2024/11/25 12:33:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:22.275 [2024-11-25 12:33:17.924148] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.275 [2024-11-25 12:33:17.924166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.275 2024/11/25 12:33:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:22.275 [2024-11-25 12:33:17.937243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.275 [2024-11-25 12:33:17.937260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.275 2024/11/25 12:33:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:22.275 [2024-11-25 12:33:17.947966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.275 [2024-11-25 12:33:17.947983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.275 2024/11/25 12:33:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:22.275 [2024-11-25 12:33:17.961148] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.275 [2024-11-25 12:33:17.961164] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.275 2024/11/25 12:33:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:22.275 [2024-11-25 12:33:17.971427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.275 [2024-11-25 12:33:17.971444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.275 2024/11/25 12:33:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:22.275 [2024-11-25 12:33:17.985182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.275 [2024-11-25 12:33:17.985200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.275 2024/11/25 12:33:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:22.275 [2024-11-25 12:33:17.994489] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.275 [2024-11-25 12:33:17.994507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.275 2024/11/25 12:33:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:22.275 [2024-11-25 12:33:18.008390] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.275 [2024-11-25 12:33:18.008407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.275 2024/11/25 12:33:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:22.275 [2024-11-25 12:33:18.020131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.275 [2024-11-25 12:33:18.020149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.275 2024/11/25 12:33:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:22.275 [2024-11-25 12:33:18.026955] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:22.275 [2024-11-25 12:33:18.026972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:22.275 2024/11/25 12:33:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
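(Editor's note: every failure in this stretch is the same JSON-RPC call replayed against subsystem nqn.2016-06.io.spdk:cnode1 while NSID 1 is still attached, so the target keeps answering Code=-32602. A minimal sketch of one such call, assuming the target's default RPC socket at /var/tmp/spdk.sock; the spdk_rpc helper is illustrative, and only the method name and params are taken from the log itself:)

    import json
    import socket

    def spdk_rpc(method, params=None, sock_path="/var/tmp/spdk.sock"):
        """Send one JSON-RPC request to a running SPDK target and return the reply."""
        req = {"jsonrpc": "2.0", "id": 1, "method": method}
        if params is not None:
            req["params"] = params
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
            sock.connect(sock_path)
            sock.sendall(json.dumps(req).encode())
            buf = b""
            while True:
                chunk = sock.recv(4096)
                if not chunk:
                    raise ConnectionError("target closed the RPC socket mid-reply")
                buf += chunk
                try:
                    return json.loads(buf)  # done once a complete JSON reply arrived
                except json.JSONDecodeError:
                    continue  # reply spans multiple recv() calls

    # Same params the log echoes back: attach bdev "malloc0" to cnode1 as NSID 1.
    params = {
        "nqn": "nqn.2016-06.io.spdk:cnode1",
        "namespace": {"bdev_name": "malloc0", "nsid": 1, "no_auto_visible": False},
    }
    print(spdk_rpc("nvmf_subsystem_add_ns", params))  # first call succeeds
    print(spdk_rpc("nvmf_subsystem_add_ns", params))  # repeat fails with Code=-32602
                                                      # (target logs "Requested NSID 1
                                                      #  already in use")

(Code -32602 is the generic JSON-RPC invalid-params error; the specific cause, "Requested NSID 1 already in use" from subsystem.c:2123, appears only in the target's own log, which is why the two streams are interleaved above.)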
[... pattern continues, 12:33:18.040 through 12:33:18.072 ...]
00:34:22.276 17580.67 IOPS, 137.35 MiB/s [2024-11-25T12:33:18.152Z]
[... pattern continues, 12:33:18.085 through 12:33:18.458 (console time 00:34:22.276-00:34:22.794) ...]
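(Editor's note: the throughput sample just above, 17580.67 IOPS / 137.35 MiB/s, shows I/O to the subsystem continuing while the add-namespace loop fails. A caller that wanted the loop to be idempotent instead of relying on the error could probe the subsystem first via the nvmf_get_subsystems RPC; a sketch reusing the hypothetical spdk_rpc helper from the earlier note, with result-field names based on current SPDK output and therefore an assumption:)

    def nsid_in_use(nqn, nsid):
        """Return True if `nsid` is already attached to subsystem `nqn`."""
        for subsys in spdk_rpc("nvmf_get_subsystems")["result"]:
            if subsys["nqn"] == nqn:
                return any(ns["nsid"] == nsid for ns in subsys.get("namespaces", []))
        return False

    if not nsid_in_use("nqn.2016-06.io.spdk:cnode1", 1):
        spdk_rpc("nvmf_subsystem_add_ns", params)  # only add when NSID 1 is free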
[... pattern continues, 12:33:18.472 through 12:33:19.064 (console time 00:34:22.794-00:34:23.313) ...]
00:34:23.313 [2024-11-25 12:33:19.074614] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:23.313 [2024-11-25 12:33:19.074632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add
namespace 00:34:23.313 17662.00 IOPS, 137.98 MiB/s [2024-11-25T12:33:19.189Z] 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:23.313 [2024-11-25 12:33:19.088388] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:23.313 [2024-11-25 12:33:19.088405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:23.313 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:23.313 [2024-11-25 12:33:19.100787] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:23.313 [2024-11-25 12:33:19.100805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:23.313 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:23.313 [2024-11-25 12:33:19.113204] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:23.313 [2024-11-25 12:33:19.113221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:23.313 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:23.313 [2024-11-25 12:33:19.124972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:23.313 [2024-11-25 12:33:19.124990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:23.313 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:23.313 [2024-11-25 12:33:19.137400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:23.313 [2024-11-25 12:33:19.137417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:23.313 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:23.313 [2024-11-25 12:33:19.149195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:23.313 [2024-11-25 12:33:19.149212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:23.313 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:23.313 [2024-11-25 12:33:19.160477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:23.313 [2024-11-25 12:33:19.160494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:23.313 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:23.313 [2024-11-25 12:33:19.173093] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:23.313 [2024-11-25 12:33:19.173110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:23.313 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:23.313 [2024-11-25 12:33:19.185544] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:23.313 [2024-11-25 12:33:19.185561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:23.571 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:23.571 [2024-11-25 12:33:19.197414] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:23.571 [2024-11-25 12:33:19.197432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:23.571 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:23.571 [2024-11-25 12:33:19.208139] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:23.571 [2024-11-25 12:33:19.208156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:23.571 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:23.571 [2024-11-25 12:33:19.214427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:23.571 [2024-11-25 12:33:19.214446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:23.571 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:23.571 [2024-11-25 12:33:19.223577] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:23.571 [2024-11-25 12:33:19.223594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:34:23.571 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:23.571 [2024-11-25 12:33:19.237505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:23.571 [2024-11-25 12:33:19.237523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:23.571 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:23.571 [2024-11-25 12:33:19.251364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:23.571 [2024-11-25 12:33:19.251382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:23.571 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:23.571 [2024-11-25 12:33:19.265004] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:23.571 [2024-11-25 12:33:19.265021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:23.571 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:23.571 [2024-11-25 12:33:19.275186] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:23.571 [2024-11-25 12:33:19.275203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:23.572 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:23.572 [2024-11-25 12:33:19.289143] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:23.572 [2024-11-25 12:33:19.289160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:23.572 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:23.572 [2024-11-25 12:33:19.298658] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:23.572 [2024-11-25 12:33:19.298676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:23.572 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:34:23.572 [2024-11-25 12:33:19.312483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:23.572 [2024-11-25 12:33:19.312500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:23.572 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:23.572 [2024-11-25 12:33:19.324455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:23.572 [2024-11-25 12:33:19.324473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:23.572 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:23.572 [2024-11-25 12:33:19.337791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:23.572 [2024-11-25 12:33:19.337808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:23.572 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:23.572 [2024-11-25 12:33:19.346857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:23.572 [2024-11-25 12:33:19.346874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:23.572 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:23.572 [2024-11-25 12:33:19.360570] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:23.572 [2024-11-25 12:33:19.360587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:23.572 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:23.572 [2024-11-25 12:33:19.371034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:23.572 [2024-11-25 12:33:19.371051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:23.572 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:23.572 [2024-11-25 12:33:19.384574] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:23.572 [2024-11-25 12:33:19.384591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:23.572 2024/11/25 12:33:19 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:23.572 [2024-11-25 12:33:19.395850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:23.572 [2024-11-25 12:33:19.395868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:23.572 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:23.572 [2024-11-25 12:33:19.409592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:23.572 [2024-11-25 12:33:19.409610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:23.572 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:23.572 [2024-11-25 12:33:19.418901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:23.572 [2024-11-25 12:33:19.418919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:23.572 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:23.572 [2024-11-25 12:33:19.432420] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:23.572 [2024-11-25 12:33:19.432437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:23.572 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:23.572 [2024-11-25 12:33:19.444467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:23.572 [2024-11-25 12:33:19.444484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:23.830 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:23.830 [2024-11-25 12:33:19.457451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:23.830 [2024-11-25 12:33:19.457469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:23.830 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:23.830 [2024-11-25 12:33:19.466593] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:23.830 [2024-11-25 12:33:19.466612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:23.830 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:23.830 [2024-11-25 12:33:19.472807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:23.830 [2024-11-25 12:33:19.472824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:23.830 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:23.830 [2024-11-25 12:33:19.483247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:23.830 [2024-11-25 12:33:19.483265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:23.830 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:23.830 [2024-11-25 12:33:19.496910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:23.830 [2024-11-25 12:33:19.496927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:23.830 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:23.830 [2024-11-25 12:33:19.507709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:23.830 [2024-11-25 12:33:19.507727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:23.830 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:23.830 [2024-11-25 12:33:19.521507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:23.830 [2024-11-25 12:33:19.521524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:23.830 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:23.830 [2024-11-25 12:33:19.531848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:23.830 [2024-11-25 12:33:19.531866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:23.830 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:23.830 [2024-11-25 12:33:19.545606] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:23.830 [2024-11-25 12:33:19.545623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:23.831 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:23.831 [2024-11-25 12:33:19.554896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:23.831 [2024-11-25 12:33:19.554913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:23.831 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:23.831 [2024-11-25 12:33:19.568457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:23.831 [2024-11-25 12:33:19.568474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:23.831 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:23.831 [2024-11-25 12:33:19.583396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:23.831 [2024-11-25 12:33:19.583414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:23.831 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:23.831 [2024-11-25 12:33:19.597203] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:23.831 [2024-11-25 12:33:19.597221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:23.831 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:23.831 [2024-11-25 12:33:19.607503] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:23.831 [2024-11-25 12:33:19.607520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:23.831 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:23.831 [2024-11-25 12:33:19.621210] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:34:23.831 [2024-11-25 12:33:19.621228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:23.831 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:23.831 [2024-11-25 12:33:19.632022] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:23.831 [2024-11-25 12:33:19.632039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:23.831 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:23.831 [2024-11-25 12:33:19.638586] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:23.831 [2024-11-25 12:33:19.638604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:23.831 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:23.831 [2024-11-25 12:33:19.647736] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:23.831 [2024-11-25 12:33:19.647764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:23.831 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:23.831 [2024-11-25 12:33:19.661538] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:23.831 [2024-11-25 12:33:19.661556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:23.831 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:23.831 [2024-11-25 12:33:19.670765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:23.831 [2024-11-25 12:33:19.670782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:23.831 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:23.831 [2024-11-25 12:33:19.684799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:23.831 [2024-11-25 12:33:19.684815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:23.831 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:23.831 [2024-11-25 12:33:19.695187] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:23.831 [2024-11-25 12:33:19.695205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:23.831 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:24.088 [2024-11-25 12:33:19.708997] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:24.088 [2024-11-25 12:33:19.709014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:24.088 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:24.088 [2024-11-25 12:33:19.719611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:24.088 [2024-11-25 12:33:19.719628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:24.088 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:24.088 [2024-11-25 12:33:19.733578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:24.088 [2024-11-25 12:33:19.733596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:24.089 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:24.089 [2024-11-25 12:33:19.747440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:24.089 [2024-11-25 12:33:19.747459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:24.089 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:24.089 [2024-11-25 12:33:19.760910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:24.089 [2024-11-25 12:33:19.760927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:24.089 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:24.089 [2024-11-25 12:33:19.772228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:34:24.089 [2024-11-25 12:33:19.772245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:24.089 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:24.089 [2024-11-25 12:33:19.778401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:24.089 [2024-11-25 12:33:19.778417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:24.089 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:24.089 [2024-11-25 12:33:19.786049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:24.089 [2024-11-25 12:33:19.786065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:24.089 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:24.089 [2024-11-25 12:33:19.795436] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:24.089 [2024-11-25 12:33:19.795453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:24.089 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:24.089 [2024-11-25 12:33:19.809056] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:24.089 [2024-11-25 12:33:19.809072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:24.089 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:24.089 [2024-11-25 12:33:19.819271] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:24.089 [2024-11-25 12:33:19.819288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:24.089 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:24.089 [2024-11-25 12:33:19.833064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:24.089 [2024-11-25 12:33:19.833082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:24.089 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:24.089 [2024-11-25 12:33:19.844293] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:24.089 [2024-11-25 12:33:19.844311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:24.089 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:24.089 [2024-11-25 12:33:19.857319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:24.089 [2024-11-25 12:33:19.857336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:24.089 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:24.089 [2024-11-25 12:33:19.868038] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:24.089 [2024-11-25 12:33:19.868055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:24.089 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:24.089 [2024-11-25 12:33:19.874127] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:24.089 [2024-11-25 12:33:19.874144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:24.089 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:24.089 [2024-11-25 12:33:19.882061] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:24.089 [2024-11-25 12:33:19.882078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:24.089 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:24.089 [2024-11-25 12:33:19.892022] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:24.089 [2024-11-25 12:33:19.892039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:24.089 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:24.089 [2024-11-25 12:33:19.898249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:24.089 [2024-11-25 12:33:19.898266] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:24.089 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:24.089 [2024-11-25 12:33:19.907401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:24.089 [2024-11-25 12:33:19.907418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:24.089 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:24.089 [2024-11-25 12:33:19.921284] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:24.089 [2024-11-25 12:33:19.921302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:24.089 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:24.089 [2024-11-25 12:33:19.930506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:24.089 [2024-11-25 12:33:19.930524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:24.089 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:24.089 [2024-11-25 12:33:19.944117] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:24.089 [2024-11-25 12:33:19.944135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:24.089 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:24.089 [2024-11-25 12:33:19.950942] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:24.089 [2024-11-25 12:33:19.950959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:24.089 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:24.349 [2024-11-25 12:33:19.964439] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:24.349 [2024-11-25 12:33:19.964456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:24.349 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:24.349 [2024-11-25 12:33:19.975002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:24.349 [2024-11-25 12:33:19.975019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:24.349 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:24.349 [2024-11-25 12:33:19.989033] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:24.349 [2024-11-25 12:33:19.989050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:24.349 2024/11/25 12:33:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:24.349 [2024-11-25 12:33:19.998315] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:24.349 [2024-11-25 12:33:19.998333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:24.349 2024/11/25 12:33:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:24.349 [2024-11-25 12:33:20.007825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:24.349 [2024-11-25 12:33:20.007843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:24.349 2024/11/25 12:33:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:24.349 [2024-11-25 12:33:20.021452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:24.349 [2024-11-25 12:33:20.021469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:24.349 2024/11/25 12:33:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:24.349 [2024-11-25 12:33:20.032103] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:24.349 [2024-11-25 12:33:20.032120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:24.349 2024/11/25 12:33:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:24.349 [2024-11-25 12:33:20.038764] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:24.349 [2024-11-25 12:33:20.038782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
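These repeated failures are the negative path the test is exercising, not a regression: NSID 1 is already attached to nqn.2016-06.io.spdk:cnode1, so every re-add is rejected with JSON-RPC -32602 (Invalid parameters), and the %!s(bool=false) in the params dump is just the Go RPC client's rendering of the boolean field. A minimal sketch of the colliding call, assuming a running SPDK target and the in-tree scripts/rpc.py helper:

    # First call succeeds: NSID 1 now belongs to malloc0 on cnode1.
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # Any repeat with the same NSID is rejected by the target with
    # JSON-RPC error -32602, producing exactly the log lines above.
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1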
00:34:24.349 17706.80 IOPS, 138.33 MiB/s [2024-11-25T12:33:20.225Z]
00:34:24.349 Latency(us)
00:34:24.349 Device Information                                                                             : runtime(s)  IOPS      MiB/s   Fail/s  TO/s  Average  min      max
00:34:24.349 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:34:24.349 Nvme1n1                                                                                        : 5.01        17711.40  138.37  0.00    0.00  7220.10  1966.08  12401.43
00:34:24.349 ===================================================================================================================
00:34:24.349 Total                                                                                          :             17711.40  138.37  0.00    0.00  7220.10  1966.08  12401.43
[... around the summary, the same nvmf_subsystem_add_ns error sequence keeps draining at roughly 8 ms intervals, from 12:33:20.081 through 12:33:20.208 ...]
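The progress lines and the final summary are self-consistent: with the 8192-byte I/Os reported in the Job line, IOPS and MiB/s are tied together. A quick sanity check of the Total row (this bc invocation is purely illustrative, not part of the test):

    # 17711.40 IOPS x 8192 B per I/O, converted to MiB/s (1 MiB = 1048576 B):
    echo "17711.40 * 8192 / 1048576" | bc -l    # prints 138.3698..., the 138.37 MiB/s in the table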
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:24.349 2024/11/25 12:33:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:24.349 [2024-11-25 12:33:20.152007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:24.349 [2024-11-25 12:33:20.152018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:24.349 2024/11/25 12:33:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:24.350 [2024-11-25 12:33:20.160007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:24.350 [2024-11-25 12:33:20.160019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:24.350 2024/11/25 12:33:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:24.350 [2024-11-25 12:33:20.168007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:24.350 [2024-11-25 12:33:20.168019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:24.350 2024/11/25 12:33:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:24.350 [2024-11-25 12:33:20.176007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:24.350 [2024-11-25 12:33:20.176018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:24.350 2024/11/25 12:33:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:24.350 [2024-11-25 12:33:20.184008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:24.350 [2024-11-25 12:33:20.184020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:24.350 2024/11/25 12:33:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:24.350 [2024-11-25 12:33:20.192008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:24.350 [2024-11-25 12:33:20.192021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:24.350 2024/11/25 12:33:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
00:34:24.350 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (104362) - No such process 00:34:24.350 12:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 104362 00:34:24.350 12:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:24.350 12:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.350 12:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:24.350 12:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.350 12:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:34:24.350 12:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.350 12:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:24.608 delay0 00:34:24.608 12:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.608 12:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:34:24.608 12:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.608 12:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:24.608 12:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.608 12:33:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:34:24.608 [2024-11-25 12:33:20.386993] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:34:32.719 Initializing NVMe Controllers 00:34:32.719 Attached to NVMe over Fabrics
controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:34:32.719 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:34:32.719 Initialization complete. Launching workers. 00:34:32.719 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 221, failed: 43039 00:34:32.719 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 43108, failed to submit 152 00:34:32.719 success 43043, unsuccessful 65, failed 0 00:34:32.719 12:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:34:32.719 12:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:34:32.719 12:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:32.719 12:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:34:32.719 12:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:32.719 12:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:34:32.719 12:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:32.719 12:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:32.719 rmmod nvme_tcp 00:34:32.719 rmmod nvme_fabrics 00:34:32.719 rmmod nvme_keyring 00:34:32.719 12:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:32.719 12:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:34:32.719 12:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:34:32.719 12:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 104199 ']' 00:34:32.719 12:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 104199 00:34:32.719 12:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 104199 ']' 00:34:32.719 12:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 104199 00:34:32.719 12:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:34:32.719 12:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:32.719 12:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104199 00:34:32.719 12:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:32.719 12:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:32.719 killing process with pid 104199 00:34:32.719 12:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104199' 00:34:32.719 12:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 104199 00:34:32.719 12:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 104199 00:34:32.719 12:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:32.719 12:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:32.719 12:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:32.719 12:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:34:32.719 12:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:32.719 12:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:34:32.719 12:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:34:32.719 12:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:32.719 12:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:34:32.719 12:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:34:32.719 12:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:34:32.719 12:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:34:32.719 12:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:34:32.719 12:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:34:32.719 12:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:34:32.719 12:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:34:32.719 12:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:34:32.719 12:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:34:32.719 12:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:34:32.720 12:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:34:32.720 12:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:32.720 12:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:32.720 12:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:34:32.720 12:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:32.720 12:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:32.720 12:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:32.720 12:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:34:32.720 00:34:32.720 real 0m25.135s 00:34:32.720 user 0m40.433s 00:34:32.720 sys 0m6.200s 00:34:32.720 12:33:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:32.720 ************************************ 00:34:32.720 END TEST nvmf_zcopy 00:34:32.720 ************************************ 00:34:32.720 12:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:32.720 12:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:34:32.720 12:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:32.720 12:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:32.720 12:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:32.720 ************************************ 00:34:32.720 START TEST nvmf_nmic 00:34:32.720 ************************************ 00:34:32.720 12:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:34:32.720 * Looking for test storage... 00:34:32.720 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:34:32.720 12:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:32.720 12:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:34:32.720 12:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:32.720 12:33:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:32.720 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:32.720 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:32.720 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:32.720 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:34:32.720 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:34:32.720 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:34:32.720 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:34:32.720 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:34:32.720 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:34:32.720 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:34:32.720 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:32.720 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:34:32.720 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:34:32.720 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:32.720 12:33:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:32.720 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:34:32.720 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:34:32.720 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:32.720 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:34:32.720 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:34:32.720 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:34:32.720 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:34:32.720 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:32.720 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:34:32.720 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:34:32.720 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:32.720 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:32.720 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:34:32.720 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:32.720 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:32.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:32.720 --rc genhtml_branch_coverage=1 00:34:32.720 --rc genhtml_function_coverage=1 00:34:32.720 --rc genhtml_legend=1 00:34:32.720 --rc geninfo_all_blocks=1 00:34:32.720 --rc geninfo_unexecuted_blocks=1 00:34:32.720 00:34:32.720 ' 00:34:32.720 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:32.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:32.720 --rc genhtml_branch_coverage=1 00:34:32.720 --rc genhtml_function_coverage=1 00:34:32.720 --rc genhtml_legend=1 00:34:32.720 --rc geninfo_all_blocks=1 00:34:32.720 --rc geninfo_unexecuted_blocks=1 00:34:32.720 00:34:32.720 ' 00:34:32.720 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:32.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:32.720 --rc genhtml_branch_coverage=1 00:34:32.720 --rc genhtml_function_coverage=1 00:34:32.720 --rc genhtml_legend=1 00:34:32.720 --rc geninfo_all_blocks=1 00:34:32.720 --rc geninfo_unexecuted_blocks=1 00:34:32.720 00:34:32.720 ' 00:34:32.720 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:32.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:32.720 --rc genhtml_branch_coverage=1 00:34:32.720 --rc genhtml_function_coverage=1 00:34:32.720 --rc genhtml_legend=1 00:34:32.720 --rc geninfo_all_blocks=1 00:34:32.720 --rc geninfo_unexecuted_blocks=1 00:34:32.720 
00:34:32.720 ' 00:34:32.720 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:34:32.720 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:34:32.720 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:32.720 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:32.720 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:32.720 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:32.720 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:32.720 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:32.720 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:32.720 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:32.720 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:32.720 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:32.720 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:34:32.720 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:34:32.720 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:32.720 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:32.720 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:34:32.720 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:32.720 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:32.720 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:34:32.720 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:32.720 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:32.720 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:32.720 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the same golangci/protoc/go triplet repeated from earlier sourcing]:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[previous PATH value] 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[previous PATH value] 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo [the exported PATH value] 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
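The PATH bloat above comes from paths/export.sh prepending the Go, protoc, and golangci directories unconditionally each time it is sourced. A guard along these lines would keep repeated sourcing idempotent; this is an illustrative sketch, not code from the repo, and path_prepend is a hypothetical helper name:

# Hypothetical idempotent prepend; avoids the duplicated segments shown above.
path_prepend() {
    case ":$PATH:" in
        *":$1:"*) ;;              # already present, skip
        *) PATH="$1:$PATH" ;;
    esac
}
path_prepend /opt/golangci/1.54.2/bin
path_prepend /opt/protoc/21.7/bin
path_prepend /opt/go/1.21.1/bin
export PATH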
00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@151 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:34:32.721 Cannot find device "nvmf_init_br" 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:34:32.721 Cannot find device "nvmf_init_br2" 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:34:32.721 Cannot find device "nvmf_tgt_br" 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:34:32.721 Cannot find device "nvmf_tgt_br2" 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:34:32.721 Cannot find device "nvmf_init_br" 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:34:32.721 Cannot find device "nvmf_init_br2" 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:34:32.721 Cannot find device "nvmf_tgt_br" 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:34:32.721 Cannot find device "nvmf_tgt_br2" 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@169 -- # true 
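The "Cannot find device" lines above are expected: nvmf_veth_init first tries to tear down any leftover interfaces, and on a clean host those deletes fail harmlessly before the topology is rebuilt. The setup the trace performs next boils down to the following sketch, using the same interface names and 10.0.0.0/24 addressing as the harness (run as root):

#!/usr/bin/env bash
# Condensed sketch of nvmf_veth_init as traced below: one netns for the
# target, four veth pairs, one bridge joining the peer ends.
set -euo pipefail

ip netns add nvmf_tgt_ns_spdk

# veth pairs: the *_if ends carry traffic, the *_br ends join the bridge.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Open the NVMe/TCP port and allow forwarding across the bridge,
# mirroring the SPDK_NVMF-tagged rules inserted by the harness.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.3    # initiator -> target, as verified below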
00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:34:32.721 Cannot find device "nvmf_br" 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:34:32.721 Cannot find device "nvmf_init_if" 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:34:32.721 Cannot find device "nvmf_init_if2" 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:32.721 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:32.721 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:34:32.721 12:33:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:34:32.721 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:34:32.722 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:34:32.722 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:34:32.722 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:34:32.722 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:34:32.722 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:34:32.722 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:34:32.722 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:34:32.722 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:34:32.722 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:34:32.722 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:34:32.722 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:34:32.722 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:34:32.722 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:34:32.722 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:34:32.722 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:34:32.722 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:34:32.722 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:34:32.722 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:34:32.722 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:34:32.722 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:34:32.722 00:34:32.722 --- 10.0.0.3 ping statistics --- 00:34:32.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:32.722 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:34:32.722 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:34:32.722 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:34:32.722 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:34:32.722 00:34:32.722 --- 10.0.0.4 ping statistics --- 00:34:32.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:32.722 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:34:32.722 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:34:32.722 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:32.722 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.015 ms 00:34:32.722 00:34:32.722 --- 10.0.0.1 ping statistics --- 00:34:32.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:32.722 rtt min/avg/max/mdev = 0.015/0.015/0.015/0.000 ms 00:34:32.722 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:34:32.722 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:32.722 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:34:32.722 00:34:32.722 --- 10.0.0.2 ping statistics --- 00:34:32.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:32.722 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:34:32.722 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:32.722 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:34:32.722 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:32.722 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:32.722 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:32.722 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:32.722 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:32.722 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:32.722 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:32.722 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:34:32.722 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:32.722 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:32.722 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:32.722 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=104745 00:34:32.722 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 104745 00:34:32.722 12:33:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 104745 ']' 00:34:32.722 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:32.722 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:34:32.722 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:32.722 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:32.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:32.722 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:32.722 12:33:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:32.722 [2024-11-25 12:33:28.405578] thread.c:3055:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:32.722 [2024-11-25 12:33:28.406376] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:34:32.722 [2024-11-25 12:33:28.406420] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:32.722 [2024-11-25 12:33:28.547978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:32.983 [2024-11-25 12:33:28.604428] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:32.983 [2024-11-25 12:33:28.604482] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:32.983 [2024-11-25 12:33:28.604489] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:32.983 [2024-11-25 12:33:28.604494] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:32.983 [2024-11-25 12:33:28.604499] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:32.983 [2024-11-25 12:33:28.605447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:32.983 [2024-11-25 12:33:28.605788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:32.983 [2024-11-25 12:33:28.606265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:32.983 [2024-11-25 12:33:28.606271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:32.983 [2024-11-25 12:33:28.682964] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:32.983 [2024-11-25 12:33:28.684153] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:32.983 [2024-11-25 12:33:28.684278] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:34:32.983 [2024-11-25 12:33:28.684637] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:32.983 [2024-11-25 12:33:28.685158] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:33.553 12:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:33.553 12:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:34:33.553 12:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:33.553 12:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:33.553 12:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:33.553 12:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:33.553 12:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:33.553 12:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.553 12:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:33.553 [2024-11-25 12:33:29.319201] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:33.553 12:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.553 12:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:33.553 12:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.553 12:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:33.553 Malloc0 00:34:33.553 12:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.553 12:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:33.553 12:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.553 12:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:33.553 12:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.553 12:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:33.553 12:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.553 12:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:33.553 12:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.553 12:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 
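At this point the nmic test has stood up the whole target side over JSON-RPC: a TCP transport, a 64 MiB malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and a listener on 10.0.0.3:4420. The same sequence as direct rpc.py calls, without the harness's rpc_cmd wrapper (a sketch; it assumes nvmf_tgt is already running inside the target namespace, as above):

#!/usr/bin/env bash
# Sketch of the target-side setup traced above, as plain rpc.py calls.
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create -b Malloc0 64 512
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
# Test case1 below then creates cnode2 and tries to add Malloc0 to it as well;
# that add fails with Code=-32602 because the bdev is already claimed
# (type exclusive_write) by cnode1.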
00:34:33.553 12:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.553 12:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:33.553 [2024-11-25 12:33:29.387129] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:34:33.553 12:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.553 12:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:34:33.553 test case1: single bdev can't be used in multiple subsystems 00:34:33.553 12:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:34:33.553 12:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.553 12:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:33.553 12:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.553 12:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:34:33.553 12:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.553 12:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:33.553 12:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.553 12:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:34:33.553 12:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:34:33.553 12:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.553 12:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:33.553 [2024-11-25 12:33:29.410920] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:34:33.553 [2024-11-25 12:33:29.410947] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:34:33.553 [2024-11-25 12:33:29.410954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:33.553 2024/11/25 12:33:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:34:33.553 request: 00:34:33.553 { 00:34:33.553 "method": "nvmf_subsystem_add_ns", 00:34:33.553 "params": { 00:34:33.553 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:34:33.553 "namespace": { 00:34:33.553 "bdev_name": "Malloc0", 00:34:33.553 "no_auto_visible": false 00:34:33.553 } 00:34:33.553 } 00:34:33.553 } 00:34:33.553 Got JSON-RPC error response 00:34:33.553 GoRPCClient: error on JSON-RPC call 00:34:33.553 12:33:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:33.553 12:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:34:33.553 12:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:34:33.553 Adding namespace failed - expected result. 00:34:33.553 12:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:34:33.553 test case2: host connect to nvmf target in multiple paths 00:34:33.553 12:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:34:33.553 12:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:34:33.553 12:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.553 12:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:33.553 [2024-11-25 12:33:29.422991] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:34:33.553 12:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.553 12:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid=5934b511-67a7-4c73-84f0-52fdf36bb79d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:34:33.811 12:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid=5934b511-67a7-4c73-84f0-52fdf36bb79d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:34:33.811 12:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:34:33.811 12:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:34:33.811 12:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:34:33.811 12:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:34:33.811 12:33:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:34:35.720 12:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:34:35.720 12:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:34:35.720 12:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:34:35.720 12:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:34:35.720 12:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:34:35.720 12:33:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:34:35.720 12:33:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:35.978 [global] 00:34:35.978 thread=1 00:34:35.978 invalidate=1 00:34:35.978 rw=write 00:34:35.978 time_based=1 00:34:35.978 runtime=1 00:34:35.978 ioengine=libaio 00:34:35.978 direct=1 00:34:35.978 bs=4096 00:34:35.978 iodepth=1 00:34:35.978 norandommap=0 00:34:35.978 numjobs=1 00:34:35.978 00:34:35.978 verify_dump=1 00:34:35.978 verify_backlog=512 00:34:35.978 verify_state_save=0 00:34:35.978 do_verify=1 00:34:35.978 verify=crc32c-intel 00:34:35.978 [job0] 00:34:35.978 filename=/dev/nvme0n1 00:34:35.978 Could not set queue depth (nvme0n1) 00:34:35.978 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:35.978 fio-3.35 00:34:35.978 Starting 1 thread 00:34:37.388 00:34:37.388 job0: (groupid=0, jobs=1): err= 0: pid=104849: Mon Nov 25 12:33:32 2024 00:34:37.388 read: IOPS=4369, BW=17.1MiB/s (17.9MB/s)(17.1MiB/1001msec) 00:34:37.388 slat (nsec): min=6287, max=67137, avg=7300.86, stdev=3271.28 00:34:37.388 clat (usec): min=92, max=392, avg=117.93, stdev=14.82 00:34:37.388 lat (usec): min=100, max=399, avg=125.23, stdev=16.10 00:34:37.388 clat percentiles (usec): 00:34:37.388 | 1.00th=[ 99], 5.00th=[ 103], 10.00th=[ 106], 20.00th=[ 109], 00:34:37.388 | 30.00th=[ 112], 40.00th=[ 113], 50.00th=[ 115], 60.00th=[ 118], 00:34:37.388 | 70.00th=[ 121], 80.00th=[ 126], 90.00th=[ 137], 95.00th=[ 143], 00:34:37.388 | 99.00th=[ 157], 99.50th=[ 167], 99.90th=[ 253], 99.95th=[ 351], 00:34:37.388 | 99.99th=[ 392] 00:34:37.388 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 00:34:37.388 slat (usec): min=9, max=101, avg=11.77, stdev= 5.85 00:34:37.388 clat (usec): min=62, max=371, avg=84.66, stdev=12.45 00:34:37.388 lat (usec): min=73, max=401, avg=96.43, stdev=15.12 00:34:37.388 clat percentiles (usec): 00:34:37.388 | 1.00th=[ 70], 5.00th=[ 73], 10.00th=[ 75], 20.00th=[ 77], 00:34:37.388 | 30.00th=[ 79], 40.00th=[ 81], 50.00th=[ 83], 60.00th=[ 85], 00:34:37.388 | 70.00th=[ 88], 80.00th=[ 93], 90.00th=[ 99], 95.00th=[ 103], 00:34:37.388 | 99.00th=[ 112], 99.50th=[ 117], 99.90th=[ 223], 99.95th=[ 269], 00:34:37.388 | 99.99th=[ 371] 00:34:37.388 bw ( KiB/s): min=17880, max=17880, per=97.10%, avg=17880.00, stdev= 0.00, samples=1 00:34:37.388 iops : min= 4470, max= 4470, avg=4470.00, stdev= 0.00, samples=1 00:34:37.388 lat (usec) : 100=48.09%, 250=51.81%, 500=0.10% 00:34:37.388 cpu : usr=1.50%, sys=6.50%, ctx=8983, majf=0, minf=5 00:34:37.388 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:37.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.388 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:37.388 issued rwts: total=4374,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:37.388 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:37.388 00:34:37.388 Run status group 0 (all jobs): 00:34:37.388 READ: bw=17.1MiB/s (17.9MB/s), 17.1MiB/s-17.1MiB/s (17.9MB/s-17.9MB/s), io=17.1MiB (17.9MB), run=1001-1001msec 00:34:37.388 WRITE: bw=18.0MiB/s (18.9MB/s), 18.0MiB/s-18.0MiB/s (18.9MB/s-18.9MB/s), io=18.0MiB (18.9MB), run=1001-1001msec 00:34:37.388 00:34:37.388 Disk stats (read/write): 00:34:37.388 nvme0n1: ios=3975/4096, merge=0/0, ticks=488/368, in_queue=856, util=91.38%
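The [global]/[job0] dump above is the job file that fio-wrapper assembles before handing it to fio. Written out as a standalone run it would look like this (a sketch; /dev/nvme0n1 assumes the connected namespace enumerated first, as it did in this run):

#!/usr/bin/env bash
# Sketch: the verify-write job fio-wrapper generated above, run directly.
cat > /tmp/nmic-job.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
EOF
fio /tmp/nmic-job.fio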
nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:37.388 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:34:37.388 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:37.388 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:34:37.388 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:34:37.388 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:37.388 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:34:37.388 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:37.388 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:34:37.388 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:34:37.388 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:34:37.388 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:37.388 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:34:37.388 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:37.388 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:34:37.388 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:37.388 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:37.388 rmmod nvme_tcp 00:34:37.388 rmmod nvme_fabrics 00:34:37.388 rmmod nvme_keyring 00:34:37.388 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:37.388 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:34:37.388 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:34:37.388 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 104745 ']' 00:34:37.388 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 104745 00:34:37.388 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 104745 ']' 00:34:37.388 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 104745 00:34:37.388 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:34:37.388 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:37.388 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104745 00:34:37.388 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:37.388 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # 
'[' reactor_0 = sudo ']' 00:34:37.388 killing process with pid 104745 00:34:37.388 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104745' 00:34:37.388 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 104745 00:34:37.388 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 104745 00:34:37.646 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:37.646 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:37.646 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:37.646 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:34:37.646 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:34:37.646 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:37.646 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:34:37.646 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:37.646 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:34:37.647 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:34:37.647 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:34:37.647 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:34:37.647 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:34:37.647 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:34:37.647 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:34:37.647 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:34:37.647 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:34:37.647 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:34:37.647 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:34:37.647 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:34:37.647 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:37.647 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:37.647 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:34:37.647 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:34:37.647 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:37.647 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:37.906 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:34:37.906 00:34:37.906 real 0m5.670s 00:34:37.906 user 0m15.165s 00:34:37.906 sys 0m1.404s 00:34:37.906 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:37.906 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:37.906 ************************************ 00:34:37.906 END TEST nvmf_nmic 00:34:37.906 ************************************ 00:34:37.906 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:34:37.906 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:37.906 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:37.906 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:37.906 ************************************ 00:34:37.906 START TEST nvmf_fio_target 00:34:37.906 ************************************ 00:34:37.906 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:34:37.906 * Looking for test storage... 
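(Annotation: the serial-polling helper that gated both nvme connect calls in the nvmf_nmic test that just finished above — and that reappears later in this log with an expected device count of 4 — reduces to a small retry loop. A condensed sketch reconstructed from the xtrace; the real waitforserial in autotest_common.sh carries extra bookkeeping, so treat the exact retry budget and ordering as approximations:)

  # poll lsblk until a block device advertising the expected NVMe serial
  # (here SPDKISFASTANDAWESOME) appears; give up after ~16 attempts
  waitforserial() {
      local serial=$1 expected=${2:-1} i=0 nvme_devices=0
      while (( i++ <= 15 )); do
          sleep 2
          # count devices whose SERIAL column matches the target serial
          nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
          (( nvme_devices == expected )) && return 0
      done
      return 1
  }

(In the nmic run above it returned on the first check after the two-second sleep, i.e. both paths to cnode1 came up immediately.)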
00:34:37.906 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:34:37.906 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:37.906 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:34:37.906 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:37.906 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:37.906 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:37.906 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:37.906 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:37.906 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:34:37.906 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:34:37.906 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:34:37.906 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:34:37.906 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:34:37.906 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:34:37.906 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:34:37.906 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:37.906 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:34:37.906 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:34:37.906 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:37.906 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:37.906 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:34:37.906 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:34:37.906 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:37.906 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:34:37.906 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:34:37.906 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:34:37.906 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:34:37.906 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:37.906 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:34:37.906 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:34:37.906 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:37.906 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:37.906 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:34:37.906 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:37.906 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:37.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:37.906 --rc genhtml_branch_coverage=1 00:34:37.906 --rc genhtml_function_coverage=1 00:34:37.906 --rc genhtml_legend=1 00:34:37.906 --rc geninfo_all_blocks=1 00:34:37.906 --rc geninfo_unexecuted_blocks=1 00:34:37.906 00:34:37.906 ' 00:34:37.906 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:37.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:37.906 --rc genhtml_branch_coverage=1 00:34:37.906 --rc genhtml_function_coverage=1 00:34:37.906 --rc genhtml_legend=1 00:34:37.906 --rc geninfo_all_blocks=1 00:34:37.906 --rc geninfo_unexecuted_blocks=1 00:34:37.906 00:34:37.906 ' 00:34:37.906 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:37.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:37.906 --rc genhtml_branch_coverage=1 00:34:37.906 --rc genhtml_function_coverage=1 00:34:37.906 --rc genhtml_legend=1 00:34:37.906 --rc geninfo_all_blocks=1 00:34:37.906 --rc geninfo_unexecuted_blocks=1 00:34:37.906 00:34:37.906 ' 00:34:37.906 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:37.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:37.906 --rc genhtml_branch_coverage=1 00:34:37.906 --rc genhtml_function_coverage=1 00:34:37.906 --rc genhtml_legend=1 00:34:37.906 --rc geninfo_all_blocks=1 00:34:37.906 --rc geninfo_unexecuted_blocks=1 00:34:37.906 
00:34:37.906 ' 00:34:37.906 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:34:37.906 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:34:37.906 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:37.906 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:37.906 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:37.906 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:37.906 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:37.906 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:37.906 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:37.906 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:37.906 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:34:37.907 12:33:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:34:37.907 Cannot find device "nvmf_init_br" 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:34:37.907 Cannot find device "nvmf_init_br2" 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:34:37.907 Cannot find device "nvmf_tgt_br" 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:34:37.907 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:34:38.166 Cannot find device "nvmf_tgt_br2" 00:34:38.166 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:34:38.166 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:34:38.166 Cannot find device "nvmf_init_br" 00:34:38.166 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:34:38.166 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:34:38.166 Cannot find device "nvmf_init_br2" 00:34:38.166 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:34:38.166 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:34:38.166 Cannot find device "nvmf_tgt_br" 00:34:38.166 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:34:38.166 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:34:38.166 Cannot find device "nvmf_tgt_br2" 00:34:38.166 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:34:38.166 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:34:38.166 Cannot find device "nvmf_br" 00:34:38.166 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:34:38.166 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:34:38.166 Cannot find device "nvmf_init_if" 00:34:38.166 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:34:38.166 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:34:38.166 Cannot find device "nvmf_init_if2" 00:34:38.166 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:34:38.166 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:38.166 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:38.166 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:34:38.166 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:38.166 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:38.166 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:34:38.166 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:34:38.166 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:34:38.166 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:34:38.166 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:34:38.166 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:34:38.166 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:34:38.166 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:34:38.166 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:34:38.166 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:34:38.166 12:33:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:34:38.166 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:34:38.166 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:34:38.166 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:34:38.166 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:34:38.166 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:34:38.166 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:34:38.166 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:34:38.166 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:34:38.166 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:34:38.166 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:34:38.166 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:34:38.166 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:34:38.166 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:34:38.166 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:34:38.166 12:33:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:34:38.166 12:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:34:38.166 12:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:34:38.166 12:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:34:38.166 12:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:34:38.166 12:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:34:38.166 12:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:34:38.166 12:33:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:34:38.166 12:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:34:38.166 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:34:38.166 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:34:38.166 00:34:38.166 --- 10.0.0.3 ping statistics --- 00:34:38.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:38.166 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:34:38.166 12:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:34:38.166 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:34:38.166 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.029 ms 00:34:38.166 00:34:38.166 --- 10.0.0.4 ping statistics --- 00:34:38.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:38.166 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:34:38.166 12:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:34:38.166 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:38.166 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.013 ms 00:34:38.166 00:34:38.166 --- 10.0.0.1 ping statistics --- 00:34:38.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:38.166 rtt min/avg/max/mdev = 0.013/0.013/0.013/0.000 ms 00:34:38.166 12:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:34:38.166 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:38.166 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:34:38.166 00:34:38.166 --- 10.0.0.2 ping statistics --- 00:34:38.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:38.166 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:34:38.167 12:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:38.167 12:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:34:38.167 12:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:38.167 12:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:38.167 12:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:38.167 12:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:38.167 12:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:38.167 12:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:38.167 12:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:38.425 12:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:34:38.425 12:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:38.425 12:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:38.425 12:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:38.425 12:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=105078 00:34:38.425 12:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 105078 00:34:38.425 12:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:34:38.425 12:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 105078 ']' 00:34:38.425 12:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:38.425 12:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:38.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:38.425 12:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:38.425 12:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:38.425 12:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:38.425 [2024-11-25 12:33:34.096353] thread.c:3055:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
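(Annotation: the four successful pings above close out nvmf_veth_init — two initiator-side veth interfaces on the host, two target-side interfaces inside the nvmf_tgt_ns_spdk namespace, all joined by the nvmf_br bridge — and the target has now been launched inside that namespace with --interrupt-mode. Condensing the ip/iptables commands from the trace above into one place; interface names and 10.0.0.x addresses are as logged, error handling and the "Cannot find device" pre-cleanup are omitted:)

  ip netns add nvmf_tgt_ns_spdk
  # veth pairs: the *_if ends carry addresses, the *_br ends join the bridge
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up;  ip link set nvmf_init_if2 up
  ip link set nvmf_init_br up;  ip link set nvmf_init_br2 up
  ip link set nvmf_tgt_br up;   ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" master nvmf_br
  done
  # admit NVMe/TCP on the default port and let the bridge forward traffic
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

(The pings exercise both directions — host to namespace for 10.0.0.3/10.0.0.4 and namespace to host for 10.0.0.1/10.0.0.2 — so a topology failure would surface here before any NVMe traffic is attempted.)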
00:34:38.425 [2024-11-25 12:33:34.097122] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:34:38.425 [2024-11-25 12:33:34.097162] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:38.425 [2024-11-25 12:33:34.234259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:38.425 [2024-11-25 12:33:34.278485] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:38.425 [2024-11-25 12:33:34.278537] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:38.425 [2024-11-25 12:33:34.278545] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:38.425 [2024-11-25 12:33:34.278550] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:38.425 [2024-11-25 12:33:34.278555] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:38.425 [2024-11-25 12:33:34.279306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:38.425 [2024-11-25 12:33:34.279370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:38.425 [2024-11-25 12:33:34.279420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:38.425 [2024-11-25 12:33:34.279413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:38.685 [2024-11-25 12:33:34.353667] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:38.685 [2024-11-25 12:33:34.354193] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:38.685 [2024-11-25 12:33:34.354509] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:38.685 [2024-11-25 12:33:34.354830] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:38.685 [2024-11-25 12:33:34.355332] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
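(Annotation: startup notices are complete — four reactors on cores 0-3, with app_thread and the four nvmf poll-group threads all in interrupt mode — and fio.sh now provisions the target over JSON-RPC. The rpc.py calls traced below condense to roughly the sequence sketched here; paths and arguments are copied from the log, and only the seven bdev_malloc_create calls are compressed into a loop:)

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # transport options taken verbatim from the trace (-t tcp -o -u 8192)
  $rpc nvmf_create_transport -t tcp -o -u 8192
  # Malloc0..Malloc6: seven 64 MiB malloc bdevs with 512 B blocks
  for i in 0 1 2 3 4 5 6; do
      $rpc bdev_malloc_create 64 512
  done
  # raid0 over Malloc2/Malloc3, concat0 over Malloc4/Malloc5/Malloc6
  $rpc bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
  $rpc bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0

(Four namespaces behind one subsystem is why the subsequent nvme connect exposes nvme0n1 through nvme0n4, waitforserial is invoked with an expected count of 4, and the fio job file that follows carries four [jobN] sections.)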
00:34:39.254 12:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:39.254 12:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:34:39.254 12:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:39.254 12:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:39.254 12:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:39.254 12:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:39.254 12:33:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:39.254 [2024-11-25 12:33:35.128265] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:39.515 12:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:39.777 12:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:34:39.777 12:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:39.777 12:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:34:39.777 12:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:40.037 12:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:34:40.037 12:33:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:40.298 12:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:34:40.298 12:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:34:40.559 12:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:40.820 12:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:34:40.820 12:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:41.095 12:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:34:41.095 12:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:41.095 12:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:34:41.095 12:33:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:34:41.353 12:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:41.610 12:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:41.610 12:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:41.869 12:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:41.869 12:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:34:41.869 12:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:34:42.126 [2024-11-25 12:33:37.916200] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:34:42.126 12:33:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:34:42.383 12:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:34:42.640 12:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid=5934b511-67a7-4c73-84f0-52fdf36bb79d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:34:42.640 12:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:34:42.640 12:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:34:42.640 12:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:34:42.640 12:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:34:42.640 12:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:34:42.640 12:33:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:34:45.163 12:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:34:45.163 12:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:34:45.163 12:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:34:45.163 12:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:34:45.163 12:33:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:34:45.163 12:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:34:45.163 12:33:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:45.163 [global] 00:34:45.163 thread=1 00:34:45.163 invalidate=1 00:34:45.163 rw=write 00:34:45.163 time_based=1 00:34:45.163 runtime=1 00:34:45.163 ioengine=libaio 00:34:45.163 direct=1 00:34:45.163 bs=4096 00:34:45.163 iodepth=1 00:34:45.163 norandommap=0 00:34:45.163 numjobs=1 00:34:45.163 00:34:45.163 verify_dump=1 00:34:45.163 verify_backlog=512 00:34:45.163 verify_state_save=0 00:34:45.163 do_verify=1 00:34:45.163 verify=crc32c-intel 00:34:45.163 [job0] 00:34:45.163 filename=/dev/nvme0n1 00:34:45.163 [job1] 00:34:45.163 filename=/dev/nvme0n2 00:34:45.163 [job2] 00:34:45.163 filename=/dev/nvme0n3 00:34:45.163 [job3] 00:34:45.163 filename=/dev/nvme0n4 00:34:45.163 Could not set queue depth (nvme0n1) 00:34:45.163 Could not set queue depth (nvme0n2) 00:34:45.163 Could not set queue depth (nvme0n3) 00:34:45.163 Could not set queue depth (nvme0n4) 00:34:45.163 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:45.163 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:45.163 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:45.163 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:45.163 fio-3.35 00:34:45.163 Starting 4 threads 00:34:46.096 00:34:46.096 job0: (groupid=0, jobs=1): err= 0: pid=105364: Mon Nov 25 12:33:41 2024 00:34:46.096 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:34:46.096 slat (nsec): min=3836, max=92380, avg=6401.91, stdev=3487.51 00:34:46.096 clat (usec): min=102, max=475, avg=171.78, stdev=26.68 00:34:46.097 lat (usec): min=107, max=480, avg=178.18, stdev=27.33 00:34:46.097 clat percentiles (usec): 00:34:46.097 | 1.00th=[ 110], 5.00th=[ 121], 10.00th=[ 153], 20.00th=[ 159], 00:34:46.097 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 172], 00:34:46.097 | 70.00th=[ 178], 80.00th=[ 184], 90.00th=[ 204], 95.00th=[ 221], 00:34:46.097 | 99.00th=[ 245], 99.50th=[ 265], 99.90th=[ 371], 99.95th=[ 400], 00:34:46.097 | 99.99th=[ 478] 00:34:46.097 write: IOPS=3267, BW=12.8MiB/s (13.4MB/s)(12.8MiB/1001msec); 0 zone resets 00:34:46.097 slat (nsec): min=5786, max=65263, avg=10216.90, stdev=4290.58 00:34:46.097 clat (usec): min=69, max=448, avg=126.56, stdev=15.04 00:34:46.097 lat (usec): min=81, max=458, avg=136.78, stdev=14.76 00:34:46.097 clat percentiles (usec): 00:34:46.097 | 1.00th=[ 105], 5.00th=[ 114], 10.00th=[ 116], 20.00th=[ 119], 00:34:46.097 | 30.00th=[ 121], 40.00th=[ 123], 50.00th=[ 125], 60.00th=[ 127], 00:34:46.097 | 70.00th=[ 130], 80.00th=[ 135], 90.00th=[ 141], 95.00th=[ 147], 00:34:46.097 | 99.00th=[ 167], 99.50th=[ 182], 99.90th=[ 314], 99.95th=[ 400], 00:34:46.097 | 99.99th=[ 449] 00:34:46.097 bw ( KiB/s): min=13184, max=13184, per=21.06%, avg=13184.00, stdev= 0.00, samples=1 00:34:46.097 iops : min= 3296, max= 3296, avg=3296.00, stdev= 0.00, samples=1 00:34:46.097 lat (usec) : 100=0.28%, 250=99.27%, 500=0.44% 00:34:46.097 cpu : usr=1.40%, 
sys=3.90%, ctx=6345, majf=0, minf=11 00:34:46.097 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:46.097 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:46.097 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:46.097 issued rwts: total=3072,3271,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:46.097 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:46.097 job1: (groupid=0, jobs=1): err= 0: pid=105365: Mon Nov 25 12:33:41 2024 00:34:46.097 read: IOPS=4300, BW=16.8MiB/s (17.6MB/s)(16.8MiB/1001msec) 00:34:46.097 slat (nsec): min=6172, max=59840, avg=6936.74, stdev=1417.02 00:34:46.097 clat (usec): min=92, max=1559, avg=117.51, stdev=25.44 00:34:46.097 lat (usec): min=99, max=1566, avg=124.45, stdev=25.48 00:34:46.097 clat percentiles (usec): 00:34:46.097 | 1.00th=[ 101], 5.00th=[ 105], 10.00th=[ 108], 20.00th=[ 111], 00:34:46.097 | 30.00th=[ 112], 40.00th=[ 114], 50.00th=[ 116], 60.00th=[ 118], 00:34:46.097 | 70.00th=[ 120], 80.00th=[ 123], 90.00th=[ 129], 95.00th=[ 137], 00:34:46.097 | 99.00th=[ 153], 99.50th=[ 165], 99.90th=[ 255], 99.95th=[ 293], 00:34:46.097 | 99.99th=[ 1565] 00:34:46.097 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 00:34:46.097 slat (nsec): min=9821, max=74657, avg=11444.84, stdev=4458.99 00:34:46.097 clat (usec): min=64, max=373, avg=87.68, stdev=11.58 00:34:46.097 lat (usec): min=79, max=383, avg=99.12, stdev=12.99 00:34:46.097 clat percentiles (usec): 00:34:46.097 | 1.00th=[ 74], 5.00th=[ 77], 10.00th=[ 79], 20.00th=[ 81], 00:34:46.097 | 30.00th=[ 83], 40.00th=[ 85], 50.00th=[ 86], 60.00th=[ 88], 00:34:46.097 | 70.00th=[ 90], 80.00th=[ 93], 90.00th=[ 99], 95.00th=[ 105], 00:34:46.097 | 99.00th=[ 123], 99.50th=[ 131], 99.90th=[ 202], 99.95th=[ 260], 00:34:46.097 | 99.99th=[ 375] 00:34:46.097 bw ( KiB/s): min=19552, max=19552, per=31.23%, avg=19552.00, stdev= 0.00, samples=1 00:34:46.097 iops : min= 4888, max= 4888, avg=4888.00, stdev= 0.00, samples=1 00:34:46.097 lat (usec) : 100=47.53%, 250=52.38%, 500=0.07%, 750=0.01% 00:34:46.097 lat (msec) : 2=0.01% 00:34:46.097 cpu : usr=1.30%, sys=6.40%, ctx=8913, majf=0, minf=5 00:34:46.097 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:46.097 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:46.097 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:46.097 issued rwts: total=4305,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:46.097 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:46.097 job2: (groupid=0, jobs=1): err= 0: pid=105366: Mon Nov 25 12:33:41 2024 00:34:46.097 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:34:46.097 slat (nsec): min=6223, max=60095, avg=7257.56, stdev=2742.29 00:34:46.097 clat (usec): min=99, max=774, avg=121.45, stdev=19.19 00:34:46.097 lat (usec): min=106, max=781, avg=128.70, stdev=19.72 00:34:46.097 clat percentiles (usec): 00:34:46.097 | 1.00th=[ 105], 5.00th=[ 109], 10.00th=[ 111], 20.00th=[ 114], 00:34:46.097 | 30.00th=[ 116], 40.00th=[ 117], 50.00th=[ 119], 60.00th=[ 122], 00:34:46.097 | 70.00th=[ 124], 80.00th=[ 128], 90.00th=[ 135], 95.00th=[ 143], 00:34:46.097 | 99.00th=[ 163], 99.50th=[ 169], 99.90th=[ 245], 99.95th=[ 506], 00:34:46.097 | 99.99th=[ 775] 00:34:46.097 write: IOPS=4509, BW=17.6MiB/s (18.5MB/s)(17.6MiB/1001msec); 0 zone resets 00:34:46.097 slat (nsec): min=9454, max=96545, avg=11480.83, stdev=5136.74 00:34:46.097 clat (usec): min=72, 
max=357, avg=91.64, stdev=12.88 00:34:46.097 lat (usec): min=82, max=390, avg=103.12, stdev=14.91 00:34:46.097 clat percentiles (usec): 00:34:46.097 | 1.00th=[ 77], 5.00th=[ 80], 10.00th=[ 82], 20.00th=[ 84], 00:34:46.097 | 30.00th=[ 86], 40.00th=[ 88], 50.00th=[ 90], 60.00th=[ 91], 00:34:46.097 | 70.00th=[ 94], 80.00th=[ 97], 90.00th=[ 104], 95.00th=[ 111], 00:34:46.097 | 99.00th=[ 131], 99.50th=[ 145], 99.90th=[ 277], 99.95th=[ 289], 00:34:46.097 | 99.99th=[ 359] 00:34:46.097 bw ( KiB/s): min=17784, max=17784, per=28.41%, avg=17784.00, stdev= 0.00, samples=1 00:34:46.097 iops : min= 4446, max= 4446, avg=4446.00, stdev= 0.00, samples=1 00:34:46.097 lat (usec) : 100=44.95%, 250=54.94%, 500=0.08%, 750=0.02%, 1000=0.01% 00:34:46.097 cpu : usr=1.10%, sys=6.60%, ctx=8611, majf=0, minf=15 00:34:46.097 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:46.097 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:46.097 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:46.097 issued rwts: total=4096,4514,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:46.097 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:46.097 job3: (groupid=0, jobs=1): err= 0: pid=105367: Mon Nov 25 12:33:41 2024 00:34:46.097 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:34:46.097 slat (nsec): min=3901, max=51929, avg=6508.50, stdev=3486.13 00:34:46.097 clat (usec): min=108, max=520, avg=171.66, stdev=25.50 00:34:46.097 lat (usec): min=114, max=539, avg=178.17, stdev=26.20 00:34:46.097 clat percentiles (usec): 00:34:46.097 | 1.00th=[ 117], 5.00th=[ 127], 10.00th=[ 155], 20.00th=[ 159], 00:34:46.097 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 172], 00:34:46.097 | 70.00th=[ 178], 80.00th=[ 184], 90.00th=[ 200], 95.00th=[ 215], 00:34:46.097 | 99.00th=[ 237], 99.50th=[ 255], 99.90th=[ 379], 99.95th=[ 482], 00:34:46.097 | 99.99th=[ 523] 00:34:46.097 write: IOPS=3268, BW=12.8MiB/s (13.4MB/s)(12.8MiB/1001msec); 0 zone resets 00:34:46.097 slat (nsec): min=5994, max=78423, avg=10704.28, stdev=4248.71 00:34:46.097 clat (usec): min=74, max=476, avg=126.04, stdev=15.11 00:34:46.097 lat (usec): min=94, max=486, avg=136.74, stdev=14.89 00:34:46.097 clat percentiles (usec): 00:34:46.097 | 1.00th=[ 106], 5.00th=[ 114], 10.00th=[ 116], 20.00th=[ 119], 00:34:46.097 | 30.00th=[ 121], 40.00th=[ 122], 50.00th=[ 124], 60.00th=[ 127], 00:34:46.097 | 70.00th=[ 129], 80.00th=[ 133], 90.00th=[ 139], 95.00th=[ 145], 00:34:46.097 | 99.00th=[ 167], 99.50th=[ 174], 99.90th=[ 326], 99.95th=[ 429], 00:34:46.097 | 99.99th=[ 478] 00:34:46.097 bw ( KiB/s): min=13184, max=13184, per=21.06%, avg=13184.00, stdev= 0.00, samples=1 00:34:46.097 iops : min= 3296, max= 3296, avg=3296.00, stdev= 0.00, samples=1 00:34:46.097 lat (usec) : 100=0.27%, 250=99.35%, 500=0.36%, 750=0.02% 00:34:46.097 cpu : usr=0.90%, sys=4.60%, ctx=6347, majf=0, minf=15 00:34:46.097 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:46.097 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:46.097 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:46.097 issued rwts: total=3072,3272,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:46.097 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:46.097 00:34:46.097 Run status group 0 (all jobs): 00:34:46.097 READ: bw=56.8MiB/s (59.5MB/s), 12.0MiB/s-16.8MiB/s (12.6MB/s-17.6MB/s), io=56.8MiB (59.6MB), run=1001-1001msec 00:34:46.097 WRITE: 
bw=61.1MiB/s (64.1MB/s), 12.8MiB/s-18.0MiB/s (13.4MB/s-18.9MB/s), io=61.2MiB (64.2MB), run=1001-1001msec 00:34:46.097 00:34:46.097 Disk stats (read/write): 00:34:46.097 nvme0n1: ios=2640/3072, merge=0/0, ticks=453/394, in_queue=847, util=89.68% 00:34:46.097 nvme0n2: ios=3825/4096, merge=0/0, ticks=466/380, in_queue=846, util=89.93% 00:34:46.097 nvme0n3: ios=3611/4025, merge=0/0, ticks=453/393, in_queue=846, util=90.05% 00:34:46.097 nvme0n4: ios=2611/3072, merge=0/0, ticks=458/401, in_queue=859, util=90.22% 00:34:46.097 12:33:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:34:46.097 [global] 00:34:46.097 thread=1 00:34:46.097 invalidate=1 00:34:46.097 rw=randwrite 00:34:46.097 time_based=1 00:34:46.097 runtime=1 00:34:46.097 ioengine=libaio 00:34:46.097 direct=1 00:34:46.097 bs=4096 00:34:46.097 iodepth=1 00:34:46.097 norandommap=0 00:34:46.097 numjobs=1 00:34:46.097 00:34:46.097 verify_dump=1 00:34:46.097 verify_backlog=512 00:34:46.097 verify_state_save=0 00:34:46.097 do_verify=1 00:34:46.097 verify=crc32c-intel 00:34:46.097 [job0] 00:34:46.097 filename=/dev/nvme0n1 00:34:46.097 [job1] 00:34:46.097 filename=/dev/nvme0n2 00:34:46.097 [job2] 00:34:46.097 filename=/dev/nvme0n3 00:34:46.097 [job3] 00:34:46.097 filename=/dev/nvme0n4 00:34:46.097 Could not set queue depth (nvme0n1) 00:34:46.097 Could not set queue depth (nvme0n2) 00:34:46.097 Could not set queue depth (nvme0n3) 00:34:46.097 Could not set queue depth (nvme0n4) 00:34:46.097 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:46.097 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:46.097 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:46.097 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:46.097 fio-3.35 00:34:46.097 Starting 4 threads 00:34:47.467 00:34:47.467 job0: (groupid=0, jobs=1): err= 0: pid=105421: Mon Nov 25 12:33:43 2024 00:34:47.467 read: IOPS=2777, BW=10.8MiB/s (11.4MB/s)(10.9MiB/1001msec) 00:34:47.467 slat (nsec): min=6432, max=59160, avg=7552.84, stdev=2949.30 00:34:47.467 clat (usec): min=97, max=751, avg=183.57, stdev=36.58 00:34:47.467 lat (usec): min=104, max=758, avg=191.13, stdev=37.03 00:34:47.467 clat percentiles (usec): 00:34:47.467 | 1.00th=[ 109], 5.00th=[ 122], 10.00th=[ 129], 20.00th=[ 163], 00:34:47.467 | 30.00th=[ 176], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 192], 00:34:47.467 | 70.00th=[ 196], 80.00th=[ 204], 90.00th=[ 221], 95.00th=[ 237], 00:34:47.467 | 99.00th=[ 281], 99.50th=[ 297], 99.90th=[ 429], 99.95th=[ 506], 00:34:47.467 | 99.99th=[ 750] 00:34:47.467 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:34:47.467 slat (usec): min=9, max=303, avg=11.22, stdev= 6.66 00:34:47.467 clat (usec): min=36, max=1382, avg=139.75, stdev=33.93 00:34:47.467 lat (usec): min=77, max=1393, avg=150.97, stdev=33.98 00:34:47.467 clat percentiles (usec): 00:34:47.467 | 1.00th=[ 83], 5.00th=[ 92], 10.00th=[ 103], 20.00th=[ 126], 00:34:47.467 | 30.00th=[ 133], 40.00th=[ 137], 50.00th=[ 141], 60.00th=[ 145], 00:34:47.467 | 70.00th=[ 149], 80.00th=[ 155], 90.00th=[ 165], 95.00th=[ 180], 00:34:47.467 | 99.00th=[ 219], 99.50th=[ 231], 99.90th=[ 285], 99.95th=[ 314], 00:34:47.467 | 99.99th=[ 1385] 
00:34:47.467 bw ( KiB/s): min=12288, max=12288, per=22.43%, avg=12288.00, stdev= 0.00, samples=1 00:34:47.467 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:34:47.467 lat (usec) : 50=0.02%, 100=4.63%, 250=93.75%, 500=1.56%, 750=0.02% 00:34:47.467 lat (usec) : 1000=0.02% 00:34:47.467 lat (msec) : 2=0.02% 00:34:47.467 cpu : usr=0.60%, sys=4.50%, ctx=5853, majf=0, minf=9 00:34:47.467 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:47.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:47.467 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:47.467 issued rwts: total=2780,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:47.467 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:47.467 job1: (groupid=0, jobs=1): err= 0: pid=105422: Mon Nov 25 12:33:43 2024 00:34:47.467 read: IOPS=3476, BW=13.6MiB/s (14.2MB/s)(13.6MiB/1001msec) 00:34:47.467 slat (nsec): min=6766, max=43872, avg=8398.21, stdev=3185.60 00:34:47.467 clat (usec): min=108, max=365, avg=148.66, stdev=16.95 00:34:47.467 lat (usec): min=115, max=373, avg=157.06, stdev=18.24 00:34:47.467 clat percentiles (usec): 00:34:47.467 | 1.00th=[ 119], 5.00th=[ 128], 10.00th=[ 133], 20.00th=[ 137], 00:34:47.467 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 147], 60.00th=[ 149], 00:34:47.467 | 70.00th=[ 155], 80.00th=[ 159], 90.00th=[ 169], 95.00th=[ 180], 00:34:47.467 | 99.00th=[ 198], 99.50th=[ 212], 99.90th=[ 281], 99.95th=[ 334], 00:34:47.467 | 99.99th=[ 367] 00:34:47.467 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:34:47.467 slat (nsec): min=10035, max=97641, avg=12848.57, stdev=5214.59 00:34:47.467 clat (usec): min=76, max=371, avg=112.28, stdev=16.36 00:34:47.467 lat (usec): min=87, max=383, avg=125.13, stdev=18.53 00:34:47.467 clat percentiles (usec): 00:34:47.467 | 1.00th=[ 89], 5.00th=[ 94], 10.00th=[ 97], 20.00th=[ 101], 00:34:47.467 | 30.00th=[ 104], 40.00th=[ 106], 50.00th=[ 110], 60.00th=[ 114], 00:34:47.467 | 70.00th=[ 117], 80.00th=[ 122], 90.00th=[ 131], 95.00th=[ 141], 00:34:47.467 | 99.00th=[ 165], 99.50th=[ 176], 99.90th=[ 239], 99.95th=[ 314], 00:34:47.467 | 99.99th=[ 371] 00:34:47.467 bw ( KiB/s): min=15497, max=15497, per=28.29%, avg=15497.00, stdev= 0.00, samples=1 00:34:47.467 iops : min= 3874, max= 3874, avg=3874.00, stdev= 0.00, samples=1 00:34:47.467 lat (usec) : 100=9.43%, 250=90.44%, 500=0.13% 00:34:47.467 cpu : usr=1.10%, sys=4.70%, ctx=7065, majf=0, minf=7 00:34:47.468 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:47.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:47.468 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:47.468 issued rwts: total=3480,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:47.468 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:47.468 job2: (groupid=0, jobs=1): err= 0: pid=105423: Mon Nov 25 12:33:43 2024 00:34:47.468 read: IOPS=3601, BW=14.1MiB/s (14.8MB/s)(14.1MiB/1001msec) 00:34:47.468 slat (nsec): min=5614, max=25691, avg=6947.32, stdev=1221.18 00:34:47.468 clat (usec): min=103, max=1460, avg=136.70, stdev=29.99 00:34:47.468 lat (usec): min=110, max=1467, avg=143.64, stdev=30.03 00:34:47.468 clat percentiles (usec): 00:34:47.468 | 1.00th=[ 109], 5.00th=[ 115], 10.00th=[ 118], 20.00th=[ 123], 00:34:47.468 | 30.00th=[ 126], 40.00th=[ 130], 50.00th=[ 133], 60.00th=[ 137], 00:34:47.468 | 70.00th=[ 141], 80.00th=[ 147], 90.00th=[ 159], 
95.00th=[ 174], 00:34:47.468 | 99.00th=[ 210], 99.50th=[ 225], 99.90th=[ 318], 99.95th=[ 416], 00:34:47.468 | 99.99th=[ 1467] 00:34:47.468 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:34:47.468 slat (nsec): min=7216, max=54711, avg=10949.27, stdev=3483.95 00:34:47.468 clat (usec): min=74, max=330, avg=105.34, stdev=18.17 00:34:47.468 lat (usec): min=86, max=341, avg=116.29, stdev=18.51 00:34:47.468 clat percentiles (usec): 00:34:47.468 | 1.00th=[ 82], 5.00th=[ 86], 10.00th=[ 89], 20.00th=[ 93], 00:34:47.468 | 30.00th=[ 96], 40.00th=[ 98], 50.00th=[ 101], 60.00th=[ 104], 00:34:47.468 | 70.00th=[ 110], 80.00th=[ 116], 90.00th=[ 128], 95.00th=[ 143], 00:34:47.468 | 99.00th=[ 172], 99.50th=[ 182], 99.90th=[ 212], 99.95th=[ 215], 00:34:47.468 | 99.99th=[ 330] 00:34:47.468 bw ( KiB/s): min=16351, max=16351, per=29.85%, avg=16351.00, stdev= 0.00, samples=1 00:34:47.468 iops : min= 4087, max= 4087, avg=4087.00, stdev= 0.00, samples=1 00:34:47.468 lat (usec) : 100=24.91%, 250=74.95%, 500=0.13% 00:34:47.468 lat (msec) : 2=0.01% 00:34:47.468 cpu : usr=0.90%, sys=5.60%, ctx=7702, majf=0, minf=16 00:34:47.468 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:47.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:47.468 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:47.468 issued rwts: total=3605,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:47.468 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:47.468 job3: (groupid=0, jobs=1): err= 0: pid=105424: Mon Nov 25 12:33:43 2024 00:34:47.468 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:34:47.468 slat (nsec): min=6585, max=96844, avg=9392.52, stdev=5997.87 00:34:47.468 clat (usec): min=108, max=4069, avg=191.46, stdev=80.82 00:34:47.468 lat (usec): min=115, max=4080, avg=200.85, stdev=80.79 00:34:47.468 clat percentiles (usec): 00:34:47.468 | 1.00th=[ 135], 5.00th=[ 153], 10.00th=[ 163], 20.00th=[ 174], 00:34:47.468 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 192], 00:34:47.468 | 70.00th=[ 198], 80.00th=[ 206], 90.00th=[ 219], 95.00th=[ 235], 00:34:47.468 | 99.00th=[ 273], 99.50th=[ 289], 99.90th=[ 383], 99.95th=[ 408], 00:34:47.468 | 99.99th=[ 4080] 00:34:47.468 write: IOPS=2953, BW=11.5MiB/s (12.1MB/s)(11.5MiB/1001msec); 0 zone resets 00:34:47.468 slat (nsec): min=9701, max=80348, avg=12140.34, stdev=4278.63 00:34:47.468 clat (usec): min=70, max=7220, avg=150.47, stdev=193.15 00:34:47.468 lat (usec): min=84, max=7230, avg=162.61, stdev=193.10 00:34:47.468 clat percentiles (usec): 00:34:47.468 | 1.00th=[ 104], 5.00th=[ 118], 10.00th=[ 124], 20.00th=[ 131], 00:34:47.468 | 30.00th=[ 135], 40.00th=[ 139], 50.00th=[ 143], 60.00th=[ 147], 00:34:47.468 | 70.00th=[ 151], 80.00th=[ 157], 90.00th=[ 169], 95.00th=[ 180], 00:34:47.468 | 99.00th=[ 212], 99.50th=[ 229], 99.90th=[ 3425], 99.95th=[ 7046], 00:34:47.468 | 99.99th=[ 7242] 00:34:47.468 bw ( KiB/s): min=12288, max=12288, per=22.43%, avg=12288.00, stdev= 0.00, samples=1 00:34:47.468 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:34:47.468 lat (usec) : 100=0.42%, 250=98.35%, 500=1.14%, 750=0.02% 00:34:47.468 lat (msec) : 4=0.02%, 10=0.05% 00:34:47.468 cpu : usr=1.00%, sys=4.40%, ctx=5518, majf=0, minf=15 00:34:47.468 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:47.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:47.468 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:47.468 issued rwts: total=2560,2956,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:47.468 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:47.468 00:34:47.468 Run status group 0 (all jobs): 00:34:47.468 READ: bw=48.5MiB/s (50.8MB/s), 9.99MiB/s-14.1MiB/s (10.5MB/s-14.8MB/s), io=48.5MiB (50.9MB), run=1001-1001msec 00:34:47.468 WRITE: bw=53.5MiB/s (56.1MB/s), 11.5MiB/s-16.0MiB/s (12.1MB/s-16.8MB/s), io=53.5MiB (56.1MB), run=1001-1001msec 00:34:47.468 00:34:47.468 Disk stats (read/write): 00:34:47.468 nvme0n1: ios=2459/2560, merge=0/0, ticks=466/380, in_queue=846, util=89.58% 00:34:47.468 nvme0n2: ios=3121/3126, merge=0/0, ticks=484/367, in_queue=851, util=90.24% 00:34:47.468 nvme0n3: ios=3304/3584, merge=0/0, ticks=469/384, in_queue=853, util=90.26% 00:34:47.468 nvme0n4: ios=2276/2560, merge=0/0, ticks=454/390, in_queue=844, util=89.52% 00:34:47.468 12:33:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:34:47.468 [global] 00:34:47.468 thread=1 00:34:47.468 invalidate=1 00:34:47.468 rw=write 00:34:47.468 time_based=1 00:34:47.468 runtime=1 00:34:47.468 ioengine=libaio 00:34:47.468 direct=1 00:34:47.468 bs=4096 00:34:47.468 iodepth=128 00:34:47.468 norandommap=0 00:34:47.468 numjobs=1 00:34:47.468 00:34:47.468 verify_dump=1 00:34:47.468 verify_backlog=512 00:34:47.468 verify_state_save=0 00:34:47.468 do_verify=1 00:34:47.468 verify=crc32c-intel 00:34:47.468 [job0] 00:34:47.468 filename=/dev/nvme0n1 00:34:47.468 [job1] 00:34:47.468 filename=/dev/nvme0n2 00:34:47.468 [job2] 00:34:47.468 filename=/dev/nvme0n3 00:34:47.468 [job3] 00:34:47.468 filename=/dev/nvme0n4 00:34:47.468 Could not set queue depth (nvme0n1) 00:34:47.468 Could not set queue depth (nvme0n2) 00:34:47.468 Could not set queue depth (nvme0n3) 00:34:47.468 Could not set queue depth (nvme0n4) 00:34:47.468 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:47.468 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:47.468 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:47.468 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:47.468 fio-3.35 00:34:47.468 Starting 4 threads 00:34:48.840 00:34:48.840 job0: (groupid=0, jobs=1): err= 0: pid=105477: Mon Nov 25 12:33:44 2024 00:34:48.840 read: IOPS=6131, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1002msec) 00:34:48.840 slat (usec): min=2, max=4795, avg=79.96, stdev=402.05 00:34:48.840 clat (usec): min=3638, max=14505, avg=10494.76, stdev=1222.86 00:34:48.840 lat (usec): min=3655, max=14634, avg=10574.73, stdev=1239.17 00:34:48.840 clat percentiles (usec): 00:34:48.840 | 1.00th=[ 7177], 5.00th=[ 8586], 10.00th=[ 9241], 20.00th=[ 9765], 00:34:48.840 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10552], 60.00th=[10683], 00:34:48.840 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11731], 95.00th=[12387], 00:34:48.840 | 99.00th=[13698], 99.50th=[13960], 99.90th=[14353], 99.95th=[14484], 00:34:48.840 | 99.99th=[14484] 00:34:48.840 write: IOPS=6132, BW=24.0MiB/s (25.1MB/s)(24.0MiB/1002msec); 0 zone resets 00:34:48.840 slat (usec): min=7, max=4575, avg=78.69, stdev=439.92 00:34:48.840 clat (usec): min=1146, max=14893, avg=10142.19, stdev=1023.52 00:34:48.840 lat (usec): min=3635, max=14918, 
avg=10220.88, stdev=1096.53 00:34:48.840 clat percentiles (usec): 00:34:48.840 | 1.00th=[ 7504], 5.00th=[ 8717], 10.00th=[ 9110], 20.00th=[ 9503], 00:34:48.840 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10159], 60.00th=[10290], 00:34:48.840 | 70.00th=[10421], 80.00th=[10552], 90.00th=[10814], 95.00th=[11994], 00:34:48.840 | 99.00th=[13960], 99.50th=[14484], 99.90th=[14615], 99.95th=[14746], 00:34:48.840 | 99.99th=[14877] 00:34:48.840 bw ( KiB/s): min=24576, max=24576, per=30.09%, avg=24576.00, stdev= 0.00, samples=2 00:34:48.840 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2 00:34:48.840 lat (msec) : 2=0.01%, 4=0.17%, 10=33.68%, 20=66.14% 00:34:48.840 cpu : usr=2.20%, sys=10.19%, ctx=565, majf=0, minf=15 00:34:48.840 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:34:48.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.840 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:48.840 issued rwts: total=6144,6145,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:48.840 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:48.840 job1: (groupid=0, jobs=1): err= 0: pid=105478: Mon Nov 25 12:33:44 2024 00:34:48.840 read: IOPS=4694, BW=18.3MiB/s (19.2MB/s)(18.4MiB/1002msec) 00:34:48.840 slat (usec): min=2, max=3857, avg=104.27, stdev=392.85 00:34:48.840 clat (usec): min=661, max=19043, avg=12901.92, stdev=1485.12 00:34:48.840 lat (usec): min=1567, max=19080, avg=13006.20, stdev=1473.03 00:34:48.840 clat percentiles (usec): 00:34:48.840 | 1.00th=[ 7242], 5.00th=[10945], 10.00th=[11338], 20.00th=[12125], 00:34:48.840 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13042], 60.00th=[13173], 00:34:48.840 | 70.00th=[13304], 80.00th=[13698], 90.00th=[14353], 95.00th=[15008], 00:34:48.840 | 99.00th=[16450], 99.50th=[16712], 99.90th=[17957], 99.95th=[18482], 00:34:48.840 | 99.99th=[19006] 00:34:48.840 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:34:48.840 slat (usec): min=6, max=3149, avg=95.05, stdev=285.08 00:34:48.840 clat (usec): min=9422, max=15889, avg=12867.92, stdev=914.18 00:34:48.840 lat (usec): min=9853, max=15905, avg=12962.97, stdev=880.25 00:34:48.840 clat percentiles (usec): 00:34:48.840 | 1.00th=[10290], 5.00th=[11469], 10.00th=[11994], 20.00th=[12256], 00:34:48.840 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12780], 60.00th=[12911], 00:34:48.840 | 70.00th=[13042], 80.00th=[13304], 90.00th=[14091], 95.00th=[14746], 00:34:48.840 | 99.00th=[15401], 99.50th=[15401], 99.90th=[15664], 99.95th=[15664], 00:34:48.840 | 99.99th=[15926] 00:34:48.840 bw ( KiB/s): min=20232, max=20480, per=24.92%, avg=20356.00, stdev=175.36, samples=2 00:34:48.840 iops : min= 5058, max= 5120, avg=5089.00, stdev=43.84, samples=2 00:34:48.840 lat (usec) : 750=0.01% 00:34:48.840 lat (msec) : 2=0.08%, 4=0.17%, 10=0.72%, 20=99.01% 00:34:48.840 cpu : usr=2.30%, sys=8.39%, ctx=891, majf=0, minf=9 00:34:48.840 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:34:48.840 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.840 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:48.840 issued rwts: total=4704,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:48.840 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:48.840 job2: (groupid=0, jobs=1): err= 0: pid=105479: Mon Nov 25 12:33:44 2024 00:34:48.840 read: IOPS=4498, BW=17.6MiB/s (18.4MB/s)(17.6MiB/1003msec) 00:34:48.840 slat (usec): min=4, 
max=3575, avg=111.94, stdev=420.44 00:34:48.840 clat (usec): min=546, max=19253, avg=13819.24, stdev=1763.86 00:34:48.840 lat (usec): min=2239, max=19467, avg=13931.18, stdev=1753.32 00:34:48.840 clat percentiles (usec): 00:34:48.840 | 1.00th=[ 5932], 5.00th=[11469], 10.00th=[11994], 20.00th=[12780], 00:34:48.840 | 30.00th=[13304], 40.00th=[13566], 50.00th=[13829], 60.00th=[14222], 00:34:48.840 | 70.00th=[14746], 80.00th=[15139], 90.00th=[15533], 95.00th=[16319], 00:34:48.840 | 99.00th=[17433], 99.50th=[17695], 99.90th=[18482], 99.95th=[18482], 00:34:48.840 | 99.99th=[19268] 00:34:48.840 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:34:48.840 slat (usec): min=7, max=3654, avg=103.23, stdev=324.46 00:34:48.840 clat (usec): min=10019, max=17836, avg=13949.35, stdev=1041.01 00:34:48.840 lat (usec): min=10405, max=19577, avg=14052.58, stdev=1007.55 00:34:48.840 clat percentiles (usec): 00:34:48.840 | 1.00th=[11338], 5.00th=[12518], 10.00th=[12911], 20.00th=[13304], 00:34:48.840 | 30.00th=[13435], 40.00th=[13566], 50.00th=[13829], 60.00th=[14091], 00:34:48.840 | 70.00th=[14353], 80.00th=[14615], 90.00th=[15139], 95.00th=[15926], 00:34:48.840 | 99.00th=[17171], 99.50th=[17433], 99.90th=[17957], 99.95th=[17957], 00:34:48.840 | 99.99th=[17957] 00:34:48.841 bw ( KiB/s): min=17216, max=19648, per=22.57%, avg=18432.00, stdev=1719.68, samples=2 00:34:48.841 iops : min= 4304, max= 4912, avg=4608.00, stdev=429.92, samples=2 00:34:48.841 lat (usec) : 750=0.01% 00:34:48.841 lat (msec) : 4=0.16%, 10=0.70%, 20=99.12% 00:34:48.841 cpu : usr=1.60%, sys=8.28%, ctx=764, majf=0, minf=11 00:34:48.841 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:34:48.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.841 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:48.841 issued rwts: total=4512,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:48.841 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:48.841 job3: (groupid=0, jobs=1): err= 0: pid=105480: Mon Nov 25 12:33:44 2024 00:34:48.841 read: IOPS=4573, BW=17.9MiB/s (18.7MB/s)(17.9MiB/1002msec) 00:34:48.841 slat (usec): min=6, max=3550, avg=110.06, stdev=435.72 00:34:48.841 clat (usec): min=1204, max=18210, avg=13812.28, stdev=1613.20 00:34:48.841 lat (usec): min=2824, max=18221, avg=13922.34, stdev=1606.74 00:34:48.841 clat percentiles (usec): 00:34:48.841 | 1.00th=[ 6456], 5.00th=[11731], 10.00th=[12125], 20.00th=[12911], 00:34:48.841 | 30.00th=[13304], 40.00th=[13698], 50.00th=[13829], 60.00th=[14091], 00:34:48.841 | 70.00th=[14484], 80.00th=[15008], 90.00th=[15533], 95.00th=[15926], 00:34:48.841 | 99.00th=[17171], 99.50th=[17433], 99.90th=[18220], 99.95th=[18220], 00:34:48.841 | 99.99th=[18220] 00:34:48.841 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:34:48.841 slat (usec): min=8, max=3608, avg=102.78, stdev=424.63 00:34:48.841 clat (usec): min=9570, max=17752, avg=13707.52, stdev=1003.22 00:34:48.841 lat (usec): min=10255, max=17765, avg=13810.30, stdev=930.27 00:34:48.841 clat percentiles (usec): 00:34:48.841 | 1.00th=[10683], 5.00th=[12125], 10.00th=[12649], 20.00th=[13042], 00:34:48.841 | 30.00th=[13304], 40.00th=[13435], 50.00th=[13698], 60.00th=[13829], 00:34:48.841 | 70.00th=[14091], 80.00th=[14484], 90.00th=[14877], 95.00th=[15401], 00:34:48.841 | 99.00th=[16319], 99.50th=[17171], 99.90th=[17695], 99.95th=[17695], 00:34:48.841 | 99.99th=[17695] 00:34:48.841 bw ( KiB/s): min=17152, max=19712, 
per=22.57%, avg=18432.00, stdev=1810.19, samples=2 00:34:48.841 iops : min= 4288, max= 4928, avg=4608.00, stdev=452.55, samples=2 00:34:48.841 lat (msec) : 2=0.01%, 4=0.16%, 10=0.76%, 20=99.06% 00:34:48.841 cpu : usr=1.80%, sys=8.59%, ctx=569, majf=0, minf=16 00:34:48.841 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:34:48.841 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.841 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:48.841 issued rwts: total=4583,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:48.841 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:48.841 00:34:48.841 Run status group 0 (all jobs): 00:34:48.841 READ: bw=77.7MiB/s (81.4MB/s), 17.6MiB/s-24.0MiB/s (18.4MB/s-25.1MB/s), io=77.9MiB (81.7MB), run=1002-1003msec 00:34:48.841 WRITE: bw=79.8MiB/s (83.6MB/s), 17.9MiB/s-24.0MiB/s (18.8MB/s-25.1MB/s), io=80.0MiB (83.9MB), run=1002-1003msec 00:34:48.841 00:34:48.841 Disk stats (read/write): 00:34:48.841 nvme0n1: ios=5170/5632, merge=0/0, ticks=26351/25187, in_queue=51538, util=89.18% 00:34:48.841 nvme0n2: ios=4145/4503, merge=0/0, ticks=13086/13594, in_queue=26680, util=89.94% 00:34:48.841 nvme0n3: ios=3920/4096, merge=0/0, ticks=13813/13369, in_queue=27182, util=90.43% 00:34:48.841 nvme0n4: ios=3977/4096, merge=0/0, ticks=13776/12931, in_queue=26707, util=90.09% 00:34:48.841 12:33:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:34:48.841 [global] 00:34:48.841 thread=1 00:34:48.841 invalidate=1 00:34:48.841 rw=randwrite 00:34:48.841 time_based=1 00:34:48.841 runtime=1 00:34:48.841 ioengine=libaio 00:34:48.841 direct=1 00:34:48.841 bs=4096 00:34:48.841 iodepth=128 00:34:48.841 norandommap=0 00:34:48.841 numjobs=1 00:34:48.841 00:34:48.841 verify_dump=1 00:34:48.841 verify_backlog=512 00:34:48.841 verify_state_save=0 00:34:48.841 do_verify=1 00:34:48.841 verify=crc32c-intel 00:34:48.841 [job0] 00:34:48.841 filename=/dev/nvme0n1 00:34:48.841 [job1] 00:34:48.841 filename=/dev/nvme0n2 00:34:48.841 [job2] 00:34:48.841 filename=/dev/nvme0n3 00:34:48.841 [job3] 00:34:48.841 filename=/dev/nvme0n4 00:34:48.841 Could not set queue depth (nvme0n1) 00:34:48.841 Could not set queue depth (nvme0n2) 00:34:48.841 Could not set queue depth (nvme0n3) 00:34:48.841 Could not set queue depth (nvme0n4) 00:34:48.841 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:48.841 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:48.841 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:48.841 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:48.841 fio-3.35 00:34:48.841 Starting 4 threads 00:34:50.211 00:34:50.211 job0: (groupid=0, jobs=1): err= 0: pid=105542: Mon Nov 25 12:33:45 2024 00:34:50.211 read: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec) 00:34:50.211 slat (usec): min=4, max=6321, avg=154.92, stdev=663.60 00:34:50.211 clat (usec): min=13267, max=36134, avg=19572.37, stdev=2465.22 00:34:50.211 lat (usec): min=13278, max=36140, avg=19727.29, stdev=2430.33 00:34:50.211 clat percentiles (usec): 00:34:50.211 | 1.00th=[14091], 5.00th=[15795], 10.00th=[16450], 20.00th=[17433], 00:34:50.211 | 
30.00th=[18482], 40.00th=[19006], 50.00th=[19792], 60.00th=[20317], 00:34:50.211 | 70.00th=[20579], 80.00th=[21365], 90.00th=[22152], 95.00th=[22938], 00:34:50.211 | 99.00th=[26608], 99.50th=[32113], 99.90th=[35914], 99.95th=[35914], 00:34:50.211 | 99.99th=[35914] 00:34:50.211 write: IOPS=3491, BW=13.6MiB/s (14.3MB/s)(13.7MiB/1005msec); 0 zone resets 00:34:50.211 slat (usec): min=6, max=6653, avg=145.70, stdev=513.30 00:34:50.211 clat (usec): min=1811, max=41675, avg=19003.45, stdev=4591.91 00:34:50.211 lat (usec): min=4051, max=41690, avg=19149.15, stdev=4609.32 00:34:50.211 clat percentiles (usec): 00:34:50.211 | 1.00th=[ 7308], 5.00th=[15008], 10.00th=[15664], 20.00th=[16581], 00:34:50.211 | 30.00th=[17433], 40.00th=[17957], 50.00th=[18220], 60.00th=[18482], 00:34:50.211 | 70.00th=[19006], 80.00th=[20055], 90.00th=[22676], 95.00th=[29230], 00:34:50.211 | 99.00th=[38536], 99.50th=[39060], 99.90th=[39584], 99.95th=[41681], 00:34:50.211 | 99.99th=[41681] 00:34:50.211 bw ( KiB/s): min=13056, max=13992, per=16.87%, avg=13524.00, stdev=661.85, samples=2 00:34:50.211 iops : min= 3264, max= 3498, avg=3381.00, stdev=165.46, samples=2 00:34:50.211 lat (msec) : 2=0.02%, 10=0.73%, 20=67.89%, 50=31.36% 00:34:50.211 cpu : usr=1.10%, sys=6.47%, ctx=838, majf=0, minf=15 00:34:50.211 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:34:50.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:50.211 issued rwts: total=3072,3509,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:50.211 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:50.211 job1: (groupid=0, jobs=1): err= 0: pid=105543: Mon Nov 25 12:33:45 2024 00:34:50.211 read: IOPS=6819, BW=26.6MiB/s (27.9MB/s)(26.7MiB/1003msec) 00:34:50.211 slat (usec): min=5, max=4120, avg=71.05, stdev=347.46 00:34:50.211 clat (usec): min=1027, max=12843, avg=9266.09, stdev=1052.29 00:34:50.211 lat (usec): min=3080, max=12896, avg=9337.14, stdev=1072.31 00:34:50.211 clat percentiles (usec): 00:34:50.211 | 1.00th=[ 6652], 5.00th=[ 7635], 10.00th=[ 8225], 20.00th=[ 8717], 00:34:50.211 | 30.00th=[ 8979], 40.00th=[ 9110], 50.00th=[ 9241], 60.00th=[ 9372], 00:34:50.211 | 70.00th=[ 9503], 80.00th=[ 9896], 90.00th=[10421], 95.00th=[11076], 00:34:50.211 | 99.00th=[12125], 99.50th=[12387], 99.90th=[12780], 99.95th=[12780], 00:34:50.211 | 99.99th=[12780] 00:34:50.211 write: IOPS=7146, BW=27.9MiB/s (29.3MB/s)(28.0MiB/1003msec); 0 zone resets 00:34:50.211 slat (usec): min=6, max=4059, avg=67.86, stdev=361.64 00:34:50.211 clat (usec): min=4737, max=12966, avg=8869.17, stdev=883.96 00:34:50.211 lat (usec): min=4756, max=12995, avg=8937.03, stdev=940.63 00:34:50.211 clat percentiles (usec): 00:34:50.211 | 1.00th=[ 6718], 5.00th=[ 7635], 10.00th=[ 8029], 20.00th=[ 8455], 00:34:50.211 | 30.00th=[ 8586], 40.00th=[ 8717], 50.00th=[ 8717], 60.00th=[ 8848], 00:34:50.211 | 70.00th=[ 8979], 80.00th=[ 9241], 90.00th=[ 9634], 95.00th=[10552], 00:34:50.211 | 99.00th=[12256], 99.50th=[12518], 99.90th=[12911], 99.95th=[12911], 00:34:50.211 | 99.99th=[12911] 00:34:50.211 bw ( KiB/s): min=28672, max=28672, per=35.77%, avg=28672.00, stdev= 0.00, samples=2 00:34:50.211 iops : min= 7168, max= 7168, avg=7168.00, stdev= 0.00, samples=2 00:34:50.211 lat (msec) : 2=0.01%, 4=0.24%, 10=87.10%, 20=12.66% 00:34:50.211 cpu : usr=3.09%, sys=10.98%, ctx=673, majf=0, minf=13 00:34:50.211 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 
00:34:50.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:50.211 issued rwts: total=6840,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:50.211 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:50.211 job2: (groupid=0, jobs=1): err= 0: pid=105544: Mon Nov 25 12:33:45 2024 00:34:50.211 read: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec) 00:34:50.211 slat (usec): min=7, max=3452, avg=86.87, stdev=324.91 00:34:50.211 clat (usec): min=7985, max=14941, avg=11136.45, stdev=1145.55 00:34:50.211 lat (usec): min=8203, max=14954, avg=11223.32, stdev=1136.45 00:34:50.211 clat percentiles (usec): 00:34:50.211 | 1.00th=[ 8717], 5.00th=[ 9241], 10.00th=[ 9634], 20.00th=[10028], 00:34:50.211 | 30.00th=[10421], 40.00th=[10683], 50.00th=[11076], 60.00th=[11731], 00:34:50.211 | 70.00th=[12125], 80.00th=[12256], 90.00th=[12518], 95.00th=[12649], 00:34:50.211 | 99.00th=[13698], 99.50th=[13698], 99.90th=[14222], 99.95th=[14353], 00:34:50.211 | 99.99th=[14877] 00:34:50.211 write: IOPS=5862, BW=22.9MiB/s (24.0MB/s)(23.0MiB/1003msec); 0 zone resets 00:34:50.211 slat (usec): min=6, max=2796, avg=82.71, stdev=354.91 00:34:50.211 clat (usec): min=1969, max=14416, avg=10881.43, stdev=1279.31 00:34:50.211 lat (usec): min=2470, max=14429, avg=10964.14, stdev=1241.12 00:34:50.211 clat percentiles (usec): 00:34:50.211 | 1.00th=[ 6783], 5.00th=[ 9372], 10.00th=[ 9634], 20.00th=[10028], 00:34:50.211 | 30.00th=[10159], 40.00th=[10552], 50.00th=[11076], 60.00th=[11469], 00:34:50.211 | 70.00th=[11731], 80.00th=[11863], 90.00th=[12125], 95.00th=[12256], 00:34:50.211 | 99.00th=[13829], 99.50th=[13960], 99.90th=[14353], 99.95th=[14353], 00:34:50.211 | 99.99th=[14353] 00:34:50.211 bw ( KiB/s): min=21152, max=24872, per=28.71%, avg=23012.00, stdev=2630.44, samples=2 00:34:50.211 iops : min= 5288, max= 6218, avg=5753.00, stdev=657.61, samples=2 00:34:50.211 lat (msec) : 2=0.01%, 4=0.21%, 10=19.11%, 20=80.67% 00:34:50.211 cpu : usr=2.89%, sys=8.78%, ctx=675, majf=0, minf=7 00:34:50.211 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:34:50.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:50.211 issued rwts: total=5632,5880,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:50.211 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:50.211 job3: (groupid=0, jobs=1): err= 0: pid=105545: Mon Nov 25 12:33:45 2024 00:34:50.211 read: IOPS=3416, BW=13.3MiB/s (14.0MB/s)(13.4MiB/1004msec) 00:34:50.211 slat (usec): min=3, max=6926, avg=148.78, stdev=687.35 00:34:50.211 clat (usec): min=1887, max=24995, avg=18575.74, stdev=2924.34 00:34:50.211 lat (usec): min=4132, max=25010, avg=18724.53, stdev=2881.17 00:34:50.211 clat percentiles (usec): 00:34:50.211 | 1.00th=[ 9765], 5.00th=[14091], 10.00th=[15139], 20.00th=[16581], 00:34:50.211 | 30.00th=[17171], 40.00th=[17957], 50.00th=[18744], 60.00th=[19792], 00:34:50.211 | 70.00th=[20579], 80.00th=[21103], 90.00th=[21890], 95.00th=[22414], 00:34:50.211 | 99.00th=[23725], 99.50th=[25035], 99.90th=[25035], 99.95th=[25035], 00:34:50.211 | 99.99th=[25035] 00:34:50.211 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:34:50.211 slat (usec): min=3, max=6090, avg=132.88, stdev=476.59 00:34:50.211 clat (usec): min=8488, max=24432, avg=17504.29, stdev=2594.30 00:34:50.211 lat (usec): min=11138, 
max=24443, avg=17637.18, stdev=2580.29 00:34:50.211 clat percentiles (usec): 00:34:50.211 | 1.00th=[11207], 5.00th=[11600], 10.00th=[12780], 20.00th=[16319], 00:34:50.211 | 30.00th=[17433], 40.00th=[17695], 50.00th=[17957], 60.00th=[18482], 00:34:50.211 | 70.00th=[18744], 80.00th=[19268], 90.00th=[20317], 95.00th=[20841], 00:34:50.211 | 99.00th=[21890], 99.50th=[22938], 99.90th=[24511], 99.95th=[24511], 00:34:50.211 | 99.99th=[24511] 00:34:50.211 bw ( KiB/s): min=12856, max=15816, per=17.88%, avg=14336.00, stdev=2093.04, samples=2 00:34:50.211 iops : min= 3214, max= 3954, avg=3584.00, stdev=523.26, samples=2 00:34:50.211 lat (msec) : 2=0.01%, 10=0.73%, 20=74.39%, 50=24.86% 00:34:50.211 cpu : usr=0.90%, sys=6.68%, ctx=807, majf=0, minf=13 00:34:50.211 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:34:50.211 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:50.211 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:50.211 issued rwts: total=3430,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:50.211 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:50.211 00:34:50.211 Run status group 0 (all jobs): 00:34:50.211 READ: bw=73.7MiB/s (77.3MB/s), 11.9MiB/s-26.6MiB/s (12.5MB/s-27.9MB/s), io=74.1MiB (77.7MB), run=1003-1005msec 00:34:50.211 WRITE: bw=78.3MiB/s (82.1MB/s), 13.6MiB/s-27.9MiB/s (14.3MB/s-29.3MB/s), io=78.7MiB (82.5MB), run=1003-1005msec 00:34:50.211 00:34:50.211 Disk stats (read/write): 00:34:50.211 nvme0n1: ios=2747/3072, merge=0/0, ticks=12978/14397, in_queue=27375, util=89.78% 00:34:50.211 nvme0n2: ios=6193/6184, merge=0/0, ticks=27770/24394, in_queue=52164, util=90.95% 00:34:50.211 nvme0n3: ios=4939/5120, merge=0/0, ticks=13701/13021, in_queue=26722, util=90.06% 00:34:50.211 nvme0n4: ios=3118/3151, merge=0/0, ticks=14159/12907, in_queue=27066, util=90.55% 00:34:50.211 12:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:34:50.211 12:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=105558 00:34:50.211 12:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:34:50.211 12:33:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:34:50.211 [global] 00:34:50.211 thread=1 00:34:50.211 invalidate=1 00:34:50.211 rw=read 00:34:50.212 time_based=1 00:34:50.212 runtime=10 00:34:50.212 ioengine=libaio 00:34:50.212 direct=1 00:34:50.212 bs=4096 00:34:50.212 iodepth=1 00:34:50.212 norandommap=1 00:34:50.212 numjobs=1 00:34:50.212 00:34:50.212 [job0] 00:34:50.212 filename=/dev/nvme0n1 00:34:50.212 [job1] 00:34:50.212 filename=/dev/nvme0n2 00:34:50.212 [job2] 00:34:50.212 filename=/dev/nvme0n3 00:34:50.212 [job3] 00:34:50.212 filename=/dev/nvme0n4 00:34:50.212 Could not set queue depth (nvme0n1) 00:34:50.212 Could not set queue depth (nvme0n2) 00:34:50.212 Could not set queue depth (nvme0n3) 00:34:50.212 Could not set queue depth (nvme0n4) 00:34:50.212 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:50.212 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:50.212 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:50.212 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, 
(T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:50.212 fio-3.35 00:34:50.212 Starting 4 threads 00:34:53.521 12:33:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:34:53.521 fio: pid=105601, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:53.521 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=80875520, buflen=4096 00:34:53.521 12:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:34:53.521 fio: pid=105600, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:53.521 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=77639680, buflen=4096 00:34:53.521 12:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:53.521 12:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:34:53.521 fio: pid=105598, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:53.521 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=64544768, buflen=4096 00:34:53.780 12:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:53.780 12:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:34:53.780 fio: pid=105599, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:53.780 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=7364608, buflen=4096 00:34:53.780 00:34:53.780 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=105598: Mon Nov 25 12:33:49 2024 00:34:53.780 read: IOPS=4740, BW=18.5MiB/s (19.4MB/s)(61.6MiB/3324msec) 00:34:53.780 slat (usec): min=4, max=14778, avg= 9.84, stdev=173.41 00:34:53.780 clat (usec): min=102, max=7387, avg=200.50, stdev=95.98 00:34:53.780 lat (usec): min=111, max=14974, avg=210.34, stdev=198.37 00:34:53.780 clat percentiles (usec): 00:34:53.780 | 1.00th=[ 121], 5.00th=[ 137], 10.00th=[ 147], 20.00th=[ 176], 00:34:53.780 | 30.00th=[ 188], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 204], 00:34:53.780 | 70.00th=[ 210], 80.00th=[ 221], 90.00th=[ 249], 95.00th=[ 265], 00:34:53.780 | 99.00th=[ 289], 99.50th=[ 306], 99.90th=[ 486], 99.95th=[ 799], 00:34:53.780 | 99.99th=[ 7177] 00:34:53.780 bw ( KiB/s): min=18080, max=19056, per=22.96%, avg=18822.67, stdev=368.01, samples=6 00:34:53.780 iops : min= 4520, max= 4764, avg=4705.67, stdev=92.00, samples=6 00:34:53.780 lat (usec) : 250=90.61%, 500=9.28%, 750=0.04%, 1000=0.01% 00:34:53.780 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01% 00:34:53.780 cpu : usr=0.45%, sys=2.92%, ctx=15766, majf=0, minf=1 00:34:53.780 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:53.780 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.780 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.780 issued rwts: total=15759,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:53.780 latency : target=0, 
window=0, percentile=100.00%, depth=1 00:34:53.780 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=105599: Mon Nov 25 12:33:49 2024 00:34:53.780 read: IOPS=5130, BW=20.0MiB/s (21.0MB/s)(71.0MiB/3544msec) 00:34:53.780 slat (usec): min=3, max=12893, avg=10.79, stdev=180.75 00:34:53.780 clat (usec): min=95, max=2874, avg=183.55, stdev=50.61 00:34:53.780 lat (usec): min=104, max=13085, avg=194.34, stdev=187.65 00:34:53.780 clat percentiles (usec): 00:34:53.780 | 1.00th=[ 110], 5.00th=[ 118], 10.00th=[ 124], 20.00th=[ 135], 00:34:53.780 | 30.00th=[ 151], 40.00th=[ 180], 50.00th=[ 192], 60.00th=[ 198], 00:34:53.780 | 70.00th=[ 204], 80.00th=[ 215], 90.00th=[ 241], 95.00th=[ 262], 00:34:53.780 | 99.00th=[ 289], 99.50th=[ 302], 99.90th=[ 420], 99.95th=[ 469], 00:34:53.780 | 99.99th=[ 1745] 00:34:53.780 bw ( KiB/s): min=18888, max=25008, per=24.38%, avg=19992.00, stdev=2458.81, samples=6 00:34:53.780 iops : min= 4722, max= 6252, avg=4998.00, stdev=614.70, samples=6 00:34:53.780 lat (usec) : 100=0.02%, 250=92.17%, 500=7.77%, 750=0.02%, 1000=0.01% 00:34:53.780 lat (msec) : 2=0.01%, 4=0.01% 00:34:53.780 cpu : usr=0.62%, sys=3.10%, ctx=18194, majf=0, minf=2 00:34:53.780 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:53.781 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.781 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.781 issued rwts: total=18183,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:53.781 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:53.781 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=105600: Mon Nov 25 12:33:49 2024 00:34:53.781 read: IOPS=6075, BW=23.7MiB/s (24.9MB/s)(74.0MiB/3120msec) 00:34:53.781 slat (usec): min=4, max=7755, avg= 7.56, stdev=72.53 00:34:53.781 clat (usec): min=100, max=1566, avg=156.44, stdev=34.81 00:34:53.781 lat (usec): min=105, max=7915, avg=164.00, stdev=81.04 00:34:53.781 clat percentiles (usec): 00:34:53.781 | 1.00th=[ 124], 5.00th=[ 130], 10.00th=[ 133], 20.00th=[ 137], 00:34:53.781 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 147], 60.00th=[ 151], 00:34:53.781 | 70.00th=[ 157], 80.00th=[ 174], 90.00th=[ 200], 95.00th=[ 217], 00:34:53.781 | 99.00th=[ 247], 99.50th=[ 269], 99.90th=[ 343], 99.95th=[ 652], 00:34:53.781 | 99.99th=[ 1385] 00:34:53.781 bw ( KiB/s): min=20696, max=25376, per=29.61%, avg=24277.33, stdev=1793.90, samples=6 00:34:53.781 iops : min= 5174, max= 6344, avg=6069.33, stdev=448.48, samples=6 00:34:53.781 lat (usec) : 250=99.15%, 500=0.79%, 750=0.03%, 1000=0.01% 00:34:53.781 lat (msec) : 2=0.02% 00:34:53.781 cpu : usr=0.71%, sys=3.49%, ctx=18958, majf=0, minf=2 00:34:53.781 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:53.781 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.781 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.781 issued rwts: total=18956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:53.781 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:53.781 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=105601: Mon Nov 25 12:33:49 2024 00:34:53.781 read: IOPS=6755, BW=26.4MiB/s (27.7MB/s)(77.1MiB/2923msec) 00:34:53.781 slat (nsec): min=4406, max=78218, avg=5724.78, stdev=1899.39 00:34:53.781 clat (usec): min=102, max=1539, avg=141.91, 
stdev=26.27 00:34:53.781 lat (usec): min=107, max=1546, avg=147.63, stdev=26.43 00:34:53.781 clat percentiles (usec): 00:34:53.781 | 1.00th=[ 116], 5.00th=[ 121], 10.00th=[ 123], 20.00th=[ 127], 00:34:53.781 | 30.00th=[ 130], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 139], 00:34:53.781 | 70.00th=[ 143], 80.00th=[ 151], 90.00th=[ 174], 95.00th=[ 198], 00:34:53.781 | 99.00th=[ 229], 99.50th=[ 239], 99.90th=[ 285], 99.95th=[ 326], 00:34:53.781 | 99.99th=[ 594] 00:34:53.781 bw ( KiB/s): min=26640, max=27616, per=33.02%, avg=27075.20, stdev=424.02, samples=5 00:34:53.781 iops : min= 6660, max= 6904, avg=6768.80, stdev=106.01, samples=5 00:34:53.781 lat (usec) : 250=99.78%, 500=0.21%, 750=0.01% 00:34:53.781 lat (msec) : 2=0.01% 00:34:53.781 cpu : usr=0.48%, sys=3.22%, ctx=19746, majf=0, minf=1 00:34:53.781 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:53.781 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.781 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.781 issued rwts: total=19746,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:53.781 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:53.781 00:34:53.781 Run status group 0 (all jobs): 00:34:53.781 READ: bw=80.1MiB/s (84.0MB/s), 18.5MiB/s-26.4MiB/s (19.4MB/s-27.7MB/s), io=284MiB (298MB), run=2923-3544msec 00:34:53.781 00:34:53.781 Disk stats (read/write): 00:34:53.781 nvme0n1: ios=14694/0, merge=0/0, ticks=2939/0, in_queue=2939, util=95.38% 00:34:53.781 nvme0n2: ios=16972/0, merge=0/0, ticks=3150/0, in_queue=3150, util=95.42% 00:34:53.781 nvme0n3: ios=17541/0, merge=0/0, ticks=2778/0, in_queue=2778, util=96.72% 00:34:53.781 nvme0n4: ios=19487/0, merge=0/0, ticks=2787/0, in_queue=2787, util=96.81% 00:34:53.781 12:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:53.781 12:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:34:54.039 12:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:54.039 12:33:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:34:54.297 12:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:54.297 12:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:34:54.555 12:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:54.555 12:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:34:54.812 12:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:54.812 12:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:34:55.069 
12:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:34:55.069 12:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 105558 00:34:55.069 12:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:34:55.069 12:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:55.069 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:55.069 12:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:55.069 12:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:34:55.069 12:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:34:55.069 12:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:55.069 12:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:34:55.069 12:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:55.069 nvmf hotplug test: fio failed as expected 00:34:55.069 12:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:34:55.069 12:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:34:55.069 12:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:34:55.069 12:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:55.330 12:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:34:55.330 12:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:34:55.330 12:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:34:55.330 12:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:34:55.330 12:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:34:55.330 12:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:55.330 12:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:34:55.330 12:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:55.330 12:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:34:55.330 12:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:55.330 12:33:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:55.330 rmmod nvme_tcp 00:34:55.330 rmmod nvme_fabrics 00:34:55.330 rmmod nvme_keyring 
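For reference, the teardown traced above reduces to a short sequence: disconnect the kernel initiator, remove the subsystem from the running target over JSON-RPC, and unload the NVMe-oF kernel modules. A minimal sketch in bash, using only commands that appear verbatim in this trace; the rpc.py path and the nqn.2016-06.io.spdk:cnode1 NQN are taken from this log and will differ in other setups.

#!/usr/bin/env bash
# Condensed sketch of the traced teardown above (fio.sh + nvmftestfini);
# assumes the repo layout and subsystem NQN shown in this log.
set -e
# Detach the kernel initiator from the target (fio.sh@72 above).
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
# Remove the subsystem from the running SPDK target over JSON-RPC (fio.sh@83 above).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
# Unload the initiator-side kernel modules, as nvmftestfini does (common.sh@126-127 above).
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics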
00:34:55.330 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:55.330 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:34:55.330 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:34:55.330 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 105078 ']' 00:34:55.330 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 105078 00:34:55.330 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 105078 ']' 00:34:55.330 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 105078 00:34:55.330 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:34:55.330 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:55.330 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105078 00:34:55.330 killing process with pid 105078 00:34:55.330 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:55.330 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:55.330 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105078' 00:34:55.330 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 105078 00:34:55.330 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 105078 00:34:55.330 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:55.330 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:55.330 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:55.330 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:34:55.330 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:55.330 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:34:55.330 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:34:55.330 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:55.330 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:34:55.330 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:34:55.591 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:34:55.591 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@235 -- # ip 
link set nvmf_tgt_br nomaster 00:34:55.591 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:34:55.591 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:34:55.591 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:34:55.591 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:34:55.591 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:34:55.591 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:34:55.591 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:34:55.591 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:34:55.591 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:55.591 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:55.591 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:34:55.591 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:55.591 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:55.591 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:55.592 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:34:55.592 00:34:55.592 real 0m17.823s 00:34:55.592 user 0m55.775s 00:34:55.592 sys 0m7.338s 00:34:55.592 ************************************ 00:34:55.592 END TEST nvmf_fio_target 00:34:55.592 ************************************ 00:34:55.592 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:55.592 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:55.592 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:55.592 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:55.592 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:55.592 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:55.592 ************************************ 00:34:55.592 START TEST nvmf_bdevio 00:34:55.592 ************************************ 00:34:55.592 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:55.853 * Looking for test storage... 
00:34:55.853 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:34:55.853 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:55.853 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:34:55.853 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:55.853 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:55.853 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:55.853 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:55.853 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:55.853 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:34:55.853 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:34:55.853 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:34:55.853 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:34:55.853 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:34:55.853 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:34:55.853 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:34:55.853 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:55.853 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:34:55.853 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:34:55.853 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:55.853 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:55.853 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:34:55.853 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:34:55.853 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:55.853 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:34:55.853 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:34:55.853 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:34:55.853 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:34:55.853 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:55.853 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:34:55.853 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:34:55.853 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:55.853 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:55.853 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:34:55.853 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:55.854 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:55.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:55.854 --rc genhtml_branch_coverage=1 00:34:55.854 --rc genhtml_function_coverage=1 00:34:55.854 --rc genhtml_legend=1 00:34:55.854 --rc geninfo_all_blocks=1 00:34:55.854 --rc geninfo_unexecuted_blocks=1 00:34:55.854 00:34:55.854 ' 00:34:55.854 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:55.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:55.854 --rc genhtml_branch_coverage=1 00:34:55.854 --rc genhtml_function_coverage=1 00:34:55.854 --rc genhtml_legend=1 00:34:55.854 --rc geninfo_all_blocks=1 00:34:55.854 --rc geninfo_unexecuted_blocks=1 00:34:55.854 00:34:55.854 ' 00:34:55.854 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:55.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:55.854 --rc genhtml_branch_coverage=1 00:34:55.854 --rc genhtml_function_coverage=1 00:34:55.854 --rc genhtml_legend=1 00:34:55.854 --rc geninfo_all_blocks=1 00:34:55.854 --rc geninfo_unexecuted_blocks=1 00:34:55.854 00:34:55.854 ' 00:34:55.854 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:55.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:55.854 --rc genhtml_branch_coverage=1 00:34:55.854 --rc genhtml_function_coverage=1 00:34:55.854 --rc genhtml_legend=1 00:34:55.854 --rc geninfo_all_blocks=1 00:34:55.854 --rc geninfo_unexecuted_blocks=1 00:34:55.854 00:34:55.854 ' 00:34:55.854 12:33:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:34:55.854 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:34:55.854 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:55.854 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:55.854 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:55.854 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:55.854 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:55.854 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:55.854 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:55.854 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:55.854 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:55.854 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:55.854 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:34:55.854 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:34:55.854 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:55.854 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:55.854 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:34:55.854 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:55.854 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:55.854 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:34:55.854 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:55.854 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:55.854 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:55.854 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:55.854 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:55.854 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:55.854 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:34:55.854 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:55.854 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:34:55.854 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:55.854 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:55.854 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:55.854 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:55.854 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:55.854 12:33:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:55.854 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:55.854 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:55.854 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:55.854 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:55.854 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:55.854 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:55.854 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:34:55.854 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:55.854 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:55.854 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:55.854 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:55.854 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:55.854 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:55.854 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:55.854 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:55.855 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:34:55.855 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:34:55.855 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:34:55.855 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:34:55.855 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:34:55.855 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:34:55.855 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:55.855 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:34:55.855 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:34:55.855 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:34:55.855 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:55.855 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:34:55.855 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:34:55.855 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:34:55.855 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:34:55.855 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:34:55.855 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:34:55.855 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:55.855 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:34:55.855 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:34:55.855 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:34:55.855 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:34:55.855 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:34:55.855 Cannot find device "nvmf_init_br" 00:34:55.855 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:34:55.855 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:34:55.855 Cannot find device "nvmf_init_br2" 00:34:55.855 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:34:55.855 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:34:55.855 Cannot find device "nvmf_tgt_br" 00:34:55.855 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:34:55.855 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:34:55.855 Cannot find device "nvmf_tgt_br2" 00:34:55.855 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:34:55.855 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:34:55.855 Cannot find device "nvmf_init_br" 00:34:55.855 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:34:55.855 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:34:55.855 Cannot find device "nvmf_init_br2" 00:34:55.855 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:34:55.855 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:34:55.855 Cannot find device "nvmf_tgt_br" 00:34:55.855 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:34:55.855 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:34:55.855 Cannot find device "nvmf_tgt_br2" 00:34:55.855 12:33:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:34:55.855 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:34:55.855 Cannot find device "nvmf_br" 00:34:55.855 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:34:55.855 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:34:55.855 Cannot find device "nvmf_init_if" 00:34:55.855 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:34:55.855 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:34:55.855 Cannot find device "nvmf_init_if2" 00:34:55.855 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:34:55.855 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:55.855 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:55.855 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:34:55.855 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:55.855 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:55.855 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:34:55.855 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:34:55.855 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:34:56.117 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:34:56.117 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:34:56.117 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:34:56.117 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:34:56.117 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:34:56.117 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:34:56.117 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:34:56.117 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:34:56.117 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:34:56.117 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:34:56.117 12:33:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:34:56.117 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:34:56.117 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:34:56.117 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:34:56.117 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:34:56.117 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:34:56.117 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:34:56.117 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:34:56.117 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:34:56.117 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:34:56.117 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:34:56.117 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:34:56.117 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:34:56.117 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:34:56.117 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:34:56.117 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:34:56.117 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:34:56.117 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:34:56.117 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:34:56.117 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:34:56.117 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:34:56.117 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:34:56.118 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:34:56.118 00:34:56.118 --- 10.0.0.3 ping statistics --- 00:34:56.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:56.118 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:34:56.118 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:34:56.118 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:34:56.118 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.033 ms 00:34:56.118 00:34:56.118 --- 10.0.0.4 ping statistics --- 00:34:56.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:56.118 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:34:56.118 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:34:56.118 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:56.118 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:34:56.118 00:34:56.118 --- 10.0.0.1 ping statistics --- 00:34:56.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:56.118 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:34:56.118 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:34:56.118 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:56.118 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.036 ms 00:34:56.118 00:34:56.118 --- 10.0.0.2 ping statistics --- 00:34:56.118 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:56.118 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:34:56.118 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:56.118 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:34:56.118 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:56.118 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:56.118 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:56.118 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:56.118 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:56.118 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:56.118 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:56.118 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:34:56.118 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:56.118 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:56.118 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:56.118 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=105972 00:34:56.118 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 105972 00:34:56.118 
12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 105972 ']' 00:34:56.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:56.118 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:56.118 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:56.118 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:34:56.118 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:56.118 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:56.118 12:33:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:56.118 [2024-11-25 12:33:51.933128] thread.c:3055:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:56.118 [2024-11-25 12:33:51.933725] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:34:56.118 [2024-11-25 12:33:51.933801] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:56.377 [2024-11-25 12:33:52.074588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:56.377 [2024-11-25 12:33:52.110789] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:56.377 [2024-11-25 12:33:52.110818] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:56.377 [2024-11-25 12:33:52.110824] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:56.377 [2024-11-25 12:33:52.110829] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:56.377 [2024-11-25 12:33:52.110834] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:56.377 [2024-11-25 12:33:52.111503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:34:56.377 [2024-11-25 12:33:52.111686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:34:56.377 [2024-11-25 12:33:52.111819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:34:56.377 [2024-11-25 12:33:52.111822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:56.377 [2024-11-25 12:33:52.165256] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:56.377 [2024-11-25 12:33:52.165328] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:56.377 [2024-11-25 12:33:52.166415] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
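
This launch is what all the preceding network plumbing served: nvmf_tgt runs inside the nvmf_tgt_ns_spdk namespace and is reachable only across the veth/bridge fabric built above. The topology below is reconstructed from the ip link/ip addr commands in the trace (a summary, not new configuration):

    # -m 0x78 = 0b0111_1000: bits 3..6 set, hence exactly four reactors,
    # on cores 3, 4, 5 and 6 -- matching the "Reactor started" lines above.
    #
    # Fabric built by the veth setup (host side <-> bridge <-> namespace):
    #   nvmf_init_if   10.0.0.1/24 --- nvmf_init_br --+
    #   nvmf_init_if2  10.0.0.2/24 --- nvmf_init_br2 -+-- nvmf_br (bridge)
    #   nvmf_tgt_if    10.0.0.3/24 --- nvmf_tgt_br ---+
    #   nvmf_tgt_if2   10.0.0.4/24 --- nvmf_tgt_br2 --+
    #   (nvmf_tgt_if and nvmf_tgt_if2 live inside netns nvmf_tgt_ns_spdk,
    #    which is why the 10.0.0.1/.2 pings above ran under ip netns exec)
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0x78

The four ping checks earlier are the harness verifying both directions of this fabric before any NVMe/TCP traffic is attempted.
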
00:34:56.377 [2024-11-25 12:33:52.166534] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:56.377 [2024-11-25 12:33:52.166621] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:56.945 12:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:56.945 12:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:34:56.945 12:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:56.945 12:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:56.945 12:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:57.205 12:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:57.205 12:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:57.205 12:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.205 12:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:57.205 [2024-11-25 12:33:52.852929] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:57.205 12:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.205 12:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:57.205 12:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.205 12:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:57.205 Malloc0 00:34:57.205 12:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.205 12:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:57.205 12:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.205 12:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:57.205 12:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.205 12:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:57.205 12:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.205 12:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:57.205 12:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.205 12:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:34:57.205 12:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.205 12:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:57.205 [2024-11-25 12:33:52.928959] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:34:57.205 12:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.205 12:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:34:57.205 12:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:34:57.205 12:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:34:57.205 12:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:34:57.205 12:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:57.205 12:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:57.205 { 00:34:57.205 "params": { 00:34:57.205 "name": "Nvme$subsystem", 00:34:57.205 "trtype": "$TEST_TRANSPORT", 00:34:57.205 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:57.205 "adrfam": "ipv4", 00:34:57.205 "trsvcid": "$NVMF_PORT", 00:34:57.205 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:57.205 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:57.205 "hdgst": ${hdgst:-false}, 00:34:57.205 "ddgst": ${ddgst:-false} 00:34:57.205 }, 00:34:57.205 "method": "bdev_nvme_attach_controller" 00:34:57.205 } 00:34:57.205 EOF 00:34:57.205 )") 00:34:57.205 12:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:34:57.205 12:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:34:57.205 12:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:34:57.205 12:33:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:57.205 "params": { 00:34:57.205 "name": "Nvme1", 00:34:57.205 "trtype": "tcp", 00:34:57.205 "traddr": "10.0.0.3", 00:34:57.205 "adrfam": "ipv4", 00:34:57.205 "trsvcid": "4420", 00:34:57.205 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:57.205 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:57.205 "hdgst": false, 00:34:57.205 "ddgst": false 00:34:57.205 }, 00:34:57.205 "method": "bdev_nvme_attach_controller" 00:34:57.205 }' 00:34:57.205 [2024-11-25 12:33:52.973087] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
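
bdevio is pointed at --json /dev/fd/62: the harness presumably wraps gen_nvmf_target_json in a bash process substitution, so the controller-attach stanza printed just above (the heredoc template with Nvme$subsystem, $NVMF_FIRST_TARGET_IP and $NVMF_PORT resolved to Nvme1, 10.0.0.3 and 4420) is read through a file-descriptor path instead of a temp file. A quick stand-alone illustration of the mechanism, not of the harness itself:

    # <(cmd) expands to a /dev/fd/N path; reading that path yields cmd's stdout.
    echo <(true)                             # prints e.g. /dev/fd/63
    cat <(printf '{"hello": "spdk"}\n')      # reads JSON through the fd path
    # which matches the shape of the traced invocation:
    #   bdevio --json <(gen_nvmf_target_json)

The fd-path trick keeps the generated credentials and NQNs out of the workspace and guarantees the config cannot go stale between generation and use.
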
00:34:57.205 [2024-11-25 12:33:52.973143] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106025 ] 00:34:57.466 [2024-11-25 12:33:53.111948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:57.466 [2024-11-25 12:33:53.148652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:57.466 [2024-11-25 12:33:53.148849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:57.466 [2024-11-25 12:33:53.148938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:57.466 I/O targets: 00:34:57.466 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:34:57.466 00:34:57.466 00:34:57.466 CUnit - A unit testing framework for C - Version 2.1-3 00:34:57.466 http://cunit.sourceforge.net/ 00:34:57.466 00:34:57.466 00:34:57.466 Suite: bdevio tests on: Nvme1n1 00:34:57.466 Test: blockdev write read block ...passed 00:34:57.744 Test: blockdev write zeroes read block ...passed 00:34:57.744 Test: blockdev write zeroes read no split ...passed 00:34:57.744 Test: blockdev write zeroes read split ...passed 00:34:57.745 Test: blockdev write zeroes read split partial ...passed 00:34:57.745 Test: blockdev reset ...[2024-11-25 12:33:53.433098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:34:57.745 [2024-11-25 12:33:53.433219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1570bc0 (9): Bad file descriptor 00:34:57.745 [2024-11-25 12:33:53.435742] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
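
Two notes on the trace above. First, the "Bad file descriptor" flush during the blockdev reset test is the disconnect path of the reset, not a failure, as the following "Resetting controller successful" notice confirms. Second, the target the suite exercises was provisioned by the rpc_cmd calls traced before the run; condensed, and assuming rpc_cmd forwards to scripts/rpc.py as elsewhere in the SPDK harness, the sequence is:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # 131072 x 512 B = 64 MiB,
                                                          # the "I/O targets" line above
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001                          # -a: allow any host NQN
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.3 -s 4420

All flags are copied verbatim from the trace; only the rpc.py framing is inferred.
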
00:34:57.745 passed 00:34:57.745 Test: blockdev write read 8 blocks ...passed 00:34:57.745 Test: blockdev write read size > 128k ...passed 00:34:57.745 Test: blockdev write read invalid size ...passed 00:34:57.745 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:57.745 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:57.745 Test: blockdev write read max offset ...passed 00:34:57.745 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:57.745 Test: blockdev writev readv 8 blocks ...passed 00:34:57.745 Test: blockdev writev readv 30 x 1block ...passed 00:34:57.745 Test: blockdev writev readv block ...passed 00:34:57.745 Test: blockdev writev readv size > 128k ...passed 00:34:57.745 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:57.745 Test: blockdev comparev and writev ...[2024-11-25 12:33:53.610632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:57.745 [2024-11-25 12:33:53.610738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:57.745 [2024-11-25 12:33:53.610788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:57.745 [2024-11-25 12:33:53.610823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:57.745 [2024-11-25 12:33:53.611078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:57.745 [2024-11-25 12:33:53.611131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:57.745 [2024-11-25 12:33:53.611163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:57.745 [2024-11-25 12:33:53.611194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:57.745 [2024-11-25 12:33:53.611510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:57.745 [2024-11-25 12:33:53.611551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:57.745 [2024-11-25 12:33:53.611585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:57.745 [2024-11-25 12:33:53.611608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:57.745 [2024-11-25 12:33:53.611993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:57.745 [2024-11-25 12:33:53.612031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:57.745 [2024-11-25 12:33:53.612073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:57.745 [2024-11-25 12:33:53.612103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:58.006 passed 00:34:58.006 Test: blockdev nvme passthru rw ...passed 00:34:58.006 Test: blockdev nvme passthru vendor specific ...[2024-11-25 12:33:53.695928] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:58.006 [2024-11-25 12:33:53.696007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:58.006 [2024-11-25 12:33:53.696112] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:58.006 [2024-11-25 12:33:53.696148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:58.006 [2024-11-25 12:33:53.696241] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:58.006 [2024-11-25 12:33:53.696272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:58.006 [2024-11-25 12:33:53.696360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:58.006 passed 00:34:58.006 Test: blockdev nvme admin passthru ...[2024-11-25 12:33:53.696392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:58.006 passed 00:34:58.006 Test: blockdev copy ...passed 00:34:58.006 00:34:58.006 Run Summary: Type Total Ran Passed Failed Inactive 00:34:58.006 suites 1 1 n/a 0 0 00:34:58.006 tests 23 23 23 0 0 00:34:58.006 asserts 152 152 152 0 n/a 00:34:58.006 00:34:58.006 Elapsed time = 0.945 seconds 00:34:58.006 12:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:58.006 12:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.006 12:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:58.006 12:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.006 12:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:34:58.006 12:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:34:58.006 12:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:58.006 12:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:34:58.267 12:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:58.267 12:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:34:58.267 12:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:58.267 12:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:58.267 rmmod nvme_tcp 00:34:58.267 rmmod nvme_fabrics 00:34:58.267 rmmod nvme_keyring 00:34:58.267 12:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
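
The module teardown in nvmftestfini above is worth decoding: the single "modprobe -v -r nvme-tcp" is what emits all three rmmod lines, because modprobe -r also removes now-unused modules the target depends on (nvme_fabrics, nvme_keyring); the follow-up "modprobe -v -r nvme-fabrics" is then a no-op safety net. The retry loop and the temporary set +e come from nvmf/common.sh lines 124-129, roughly (paraphrased from the trace; retry pacing is the harness's, not shown here):

    set +e                                 # unloading may legitimately fail
    for i in {1..20}; do                   # retry: the module can stay busy
        modprobe -v -r nvme-tcp && break   #   briefly after a disconnect
    done
    modprobe -v -r nvme-fabrics            # usually already gone, see above
    set -e

Tolerating unload failure matters on CI: another test's initiator may still hold a reference, and aborting the whole run over a busy module would mask the real result.
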
00:34:58.267 12:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:34:58.267 12:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:34:58.267 12:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 105972 ']' 00:34:58.267 12:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 105972 00:34:58.267 12:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 105972 ']' 00:34:58.267 12:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 105972 00:34:58.267 12:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:34:58.267 12:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:58.267 12:33:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105972 00:34:58.267 12:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:34:58.267 killing process with pid 105972 00:34:58.267 12:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:34:58.267 12:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105972' 00:34:58.267 12:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 105972 00:34:58.267 12:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 105972 00:34:58.267 12:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:58.267 12:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:58.267 12:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:58.267 12:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:34:58.267 12:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:34:58.267 12:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:58.267 12:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:34:58.528 12:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:58.528 12:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:34:58.528 12:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:34:58.528 12:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:34:58.528 12:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:34:58.528 12:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:34:58.528 12:33:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:34:58.528 12:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:34:58.528 12:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:34:58.528 12:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:34:58.528 12:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:34:58.528 12:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:34:58.528 12:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:34:58.528 12:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:58.528 12:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:58.528 12:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:34:58.528 12:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:58.528 12:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:58.528 12:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:58.528 12:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:34:58.528 00:34:58.528 real 0m2.925s 00:34:58.528 user 0m6.357s 00:34:58.528 sys 0m0.899s 00:34:58.528 ************************************ 00:34:58.528 12:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:58.528 12:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:58.528 END TEST nvmf_bdevio 00:34:58.528 ************************************ 00:34:58.528 12:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:34:58.528 00:34:58.528 real 3m22.990s 00:34:58.528 user 9m6.770s 00:34:58.528 sys 0m57.586s 00:34:58.528 12:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:58.528 12:33:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:58.528 ************************************ 00:34:58.528 END TEST nvmf_target_core_interrupt_mode 00:34:58.528 ************************************ 00:34:58.788 12:33:54 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /home/vagrant/spdk_repo/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:34:58.788 12:33:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:58.788 12:33:54 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:58.788 12:33:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:58.788 ************************************ 00:34:58.788 START TEST nvmf_interrupt 00:34:58.788 ************************************ 00:34:58.788 12:33:54 nvmf_tcp.nvmf_interrupt -- 
common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:34:58.788 * Looking for test storage... 00:34:58.788 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:34:58.788 12:33:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:58.788 12:33:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:34:58.788 12:33:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:58.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:58.789 --rc genhtml_branch_coverage=1 00:34:58.789 --rc genhtml_function_coverage=1 00:34:58.789 --rc genhtml_legend=1 00:34:58.789 --rc geninfo_all_blocks=1 00:34:58.789 --rc geninfo_unexecuted_blocks=1 00:34:58.789 00:34:58.789 ' 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:58.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:58.789 --rc genhtml_branch_coverage=1 00:34:58.789 --rc genhtml_function_coverage=1 00:34:58.789 --rc genhtml_legend=1 00:34:58.789 --rc geninfo_all_blocks=1 00:34:58.789 --rc geninfo_unexecuted_blocks=1 00:34:58.789 00:34:58.789 ' 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:58.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:58.789 --rc genhtml_branch_coverage=1 00:34:58.789 --rc genhtml_function_coverage=1 00:34:58.789 --rc genhtml_legend=1 00:34:58.789 --rc geninfo_all_blocks=1 00:34:58.789 --rc geninfo_unexecuted_blocks=1 00:34:58.789 00:34:58.789 ' 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:58.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:58.789 --rc genhtml_branch_coverage=1 00:34:58.789 --rc genhtml_function_coverage=1 00:34:58.789 --rc genhtml_legend=1 00:34:58.789 --rc geninfo_all_blocks=1 00:34:58.789 --rc geninfo_unexecuted_blocks=1 00:34:58.789 00:34:58.789 ' 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:34:58.789 12:33:54 
nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/common.sh 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@460 -- # nvmf_veth_init 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 
00:34:58.789 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:34:58.790 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:58.790 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:34:58.790 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:34:58.790 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:34:58.790 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:34:58.790 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:34:58.790 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:34:58.790 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:58.790 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:34:58.790 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:34:58.790 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:34:58.790 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:34:58.790 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:34:58.790 Cannot find device "nvmf_init_br" 00:34:58.790 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@162 -- # true 00:34:58.790 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:34:58.790 Cannot find device "nvmf_init_br2" 00:34:58.790 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@163 -- # true 00:34:58.790 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:34:58.790 Cannot find device "nvmf_tgt_br" 00:34:58.790 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@164 -- # true 00:34:58.790 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:34:58.790 Cannot find device "nvmf_tgt_br2" 00:34:58.790 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@165 -- # true 00:34:58.790 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:34:58.790 Cannot find device "nvmf_init_br" 00:34:58.790 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@166 -- # true 00:34:58.790 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:34:58.790 Cannot find device "nvmf_init_br2" 00:34:58.790 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@167 -- # true 00:34:58.790 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:34:58.790 Cannot find device "nvmf_tgt_br" 00:34:58.790 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@168 -- # true 00:34:58.790 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:34:58.790 Cannot find device "nvmf_tgt_br2" 00:34:58.790 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@169 -- # true 00:34:58.790 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:34:59.051 Cannot find device "nvmf_br" 00:34:59.051 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@170 -- # true 00:34:59.051 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@171 -- # ip 
link delete nvmf_init_if 00:34:59.051 Cannot find device "nvmf_init_if" 00:34:59.051 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@171 -- # true 00:34:59.051 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:34:59.051 Cannot find device "nvmf_init_if2" 00:34:59.051 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@172 -- # true 00:34:59.051 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:34:59.051 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:59.051 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@173 -- # true 00:34:59.051 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:34:59.051 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:34:59.051 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@174 -- # true 00:34:59.051 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:34:59.051 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:34:59.051 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:34:59.051 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:34:59.051 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:34:59.051 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:34:59.051 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:34:59.051 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:34:59.051 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:34:59.051 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:34:59.051 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:34:59.051 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:34:59.051 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:34:59.051 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:34:59.051 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:34:59.051 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:34:59.051 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:34:59.051 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:34:59.051 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:34:59.051 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:34:59.051 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:34:59.051 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@208 -- # ip link set nvmf_br up 
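[Editor's note: the nvmf_veth_init sequence traced above, together with the bridge enslaving and iptables ACCEPT rules that follow, reduces to a short standalone recipe. A minimal sketch, using only the namespace, interface, and address names taken from the traced commands; the second veth pair (nvmf_init_if2/nvmf_tgt_if2, 10.0.0.2/10.0.0.4) is built the same way and omitted for brevity; assumes root and iproute2.]

    ip netns add nvmf_tgt_ns_spdk                               # target runs in its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if                    # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target listen address
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                     # bridge the two peer ends together
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
    ping -c 1 10.0.0.3                                          # initiator -> target reachability check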
00:34:59.051 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:34:59.051 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:34:59.051 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:34:59.052 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:34:59.052 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:34:59.052 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:34:59.052 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:34:59.052 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:34:59.052 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:34:59.052 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:34:59.052 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:34:59.052 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:34:59.052 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:34:59.052 00:34:59.052 --- 10.0.0.3 ping statistics --- 00:34:59.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:59.052 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:34:59.052 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:34:59.052 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:34:59.052 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:34:59.052 00:34:59.052 --- 10.0.0.4 ping statistics --- 00:34:59.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:59.052 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:34:59.052 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:34:59.052 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:59.052 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:34:59.052 00:34:59.052 --- 10.0.0.1 ping statistics --- 00:34:59.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:59.052 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:34:59.052 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:34:59.052 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:59.052 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:34:59.052 00:34:59.052 --- 10.0.0.2 ping statistics --- 00:34:59.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:59.052 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:34:59.052 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:59.052 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@461 -- # return 0 00:34:59.052 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:59.052 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:59.052 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:59.052 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:59.052 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:59.052 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:59.052 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:59.052 12:33:54 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:34:59.052 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:59.052 12:33:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:59.052 12:33:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:59.052 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=106269 00:34:59.052 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 106269 00:34:59.052 12:33:54 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:34:59.052 12:33:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 106269 ']' 00:34:59.052 12:33:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:59.052 12:33:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:59.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:59.052 12:33:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:59.052 12:33:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:59.052 12:33:54 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:59.052 [2024-11-25 12:33:54.922876] thread.c:3055:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:59.052 [2024-11-25 12:33:54.923484] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:34:59.052 [2024-11-25 12:33:54.923516] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:59.311 [2024-11-25 12:33:55.060662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:59.311 [2024-11-25 12:33:55.101512] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
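[Editor's note: the target bring-up that the trace walks through next also condenses well — nvmf_tgt is launched inside the namespace in interrupt mode, then the subsystem is provisioned over JSON-RPC. A minimal sketch, assuming a stock SPDK checkout with paths relative to the repo root; the test itself goes through the rpc_cmd wrapper, so the equivalent direct rpc.py calls below are an assumption, with flags copied verbatim from the traced commands.]

    # Launch the target inside the namespace: interrupt mode, cores 0-1 (mask 0x3)
    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
    # Wait for /var/tmp/spdk.sock before issuing RPCs (the harness uses waitforlisten)
    # Back an AIO bdev with the 10 MB file created by dd, 2048-byte blocks as traced
    ./scripts/rpc.py bdev_aio_create ./test/nvmf/target/aiofile AIO0 2048
    # TCP transport; flags copied verbatim from the traced rpc_cmd
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -q 256
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

[spdk_nvme_perf then drives randrw I/O against 10.0.0.3:4420 while the reactor_is_busy/idle helpers sample per-reactor CPU with top -bHn 1 -p <pid>, checking that interrupt mode leaves the reactors near 0% CPU when no I/O is in flight and busy (above the 30% threshold) under load, as the trace below records.]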
00:34:59.311 [2024-11-25 12:33:55.101549] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:59.311 [2024-11-25 12:33:55.101555] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:59.311 [2024-11-25 12:33:55.101560] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:59.311 [2024-11-25 12:33:55.101566] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:59.311 [2024-11-25 12:33:55.102481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:59.311 [2024-11-25 12:33:55.102596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:59.311 [2024-11-25 12:33:55.171809] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:59.311 [2024-11-25 12:33:55.172102] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:59.311 [2024-11-25 12:33:55.172536] thread.c:2116:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:00.252 12:33:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:00.252 12:33:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:35:00.252 12:33:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:00.252 12:33:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:00.252 12:33:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:00.252 12:33:55 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:00.252 12:33:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:35:00.252 12:33:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:35:00.252 12:33:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:35:00.252 12:33:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:35:00.252 5000+0 records in 00:35:00.252 5000+0 records out 00:35:00.252 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0203492 s, 503 MB/s 00:35:00.252 12:33:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aiofile AIO0 2048 00:35:00.252 12:33:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.252 12:33:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:00.252 AIO0 00:35:00.252 12:33:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.252 12:33:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:35:00.252 12:33:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.252 12:33:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:00.252 [2024-11-25 12:33:55.907486] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:00.252 12:33:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.252 12:33:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- 
# rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:00.252 12:33:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.252 12:33:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:00.252 12:33:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.252 12:33:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:35:00.252 12:33:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.252 12:33:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:00.252 12:33:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.252 12:33:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:35:00.252 12:33:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.252 12:33:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:00.252 [2024-11-25 12:33:55.935891] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:35:00.252 12:33:55 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.252 12:33:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:35:00.252 12:33:55 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 106269 0 00:35:00.252 12:33:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 106269 0 idle 00:35:00.252 12:33:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=106269 00:35:00.252 12:33:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:00.252 12:33:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:00.252 12:33:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:00.252 12:33:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:00.252 12:33:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:00.252 12:33:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:00.252 12:33:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:00.252 12:33:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:00.252 12:33:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:00.252 12:33:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:00.252 12:33:55 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106269 -w 256 00:35:00.252 12:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106269 root 20 0 64.2g 45312 33024 S 0.0 0.4 0:00.19 reactor_0' 00:35:00.252 12:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106269 root 20 0 64.2g 45312 33024 S 0.0 0.4 0:00.19 reactor_0 00:35:00.252 12:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:00.252 12:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:00.252 12:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:00.252 12:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:00.252 12:33:56 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:00.252 12:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:00.252 12:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:00.252 12:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:00.252 12:33:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:35:00.252 12:33:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 106269 1 00:35:00.252 12:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 106269 1 idle 00:35:00.252 12:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=106269 00:35:00.252 12:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:00.252 12:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:00.252 12:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:00.252 12:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:00.252 12:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:00.252 12:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:00.252 12:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:00.252 12:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:00.252 12:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:00.252 12:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106269 -w 256 00:35:00.252 12:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:00.512 12:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106273 root 20 0 64.2g 45312 33024 S 0.0 0.4 0:00.00 reactor_1' 00:35:00.512 12:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106273 root 20 0 64.2g 45312 33024 S 0.0 0.4 0:00.00 reactor_1 00:35:00.512 12:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:00.512 12:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:00.512 12:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:00.512 12:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:00.512 12:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:00.512 12:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:00.512 12:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:00.512 12:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:00.512 12:33:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:35:00.512 12:33:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=106345 00:35:00.512 12:33:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:00.512 12:33:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:35:00.512 12:33:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:35:00.512 
12:33:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 106269 0 00:35:00.512 12:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 106269 0 busy 00:35:00.512 12:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=106269 00:35:00.512 12:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:00.512 12:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:35:00.512 12:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:35:00.512 12:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:00.512 12:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:35:00.512 12:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:00.512 12:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:00.512 12:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:00.512 12:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106269 -w 256 00:35:00.512 12:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:00.772 12:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106269 root 20 0 64.2g 45696 33152 S 0.0 0.4 0:00.20 reactor_0' 00:35:00.772 12:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106269 root 20 0 64.2g 45696 33152 S 0.0 0.4 0:00.20 reactor_0 00:35:00.772 12:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:00.772 12:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:00.772 12:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:00.772 12:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:00.773 12:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:00.773 12:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:35:00.773 12:33:56 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:35:01.713 12:33:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:35:01.713 12:33:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:01.713 12:33:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106269 -w 256 00:35:01.713 12:33:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:01.974 12:33:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106269 root 20 0 64.2g 46592 33280 R 99.9 0.4 0:01.44 reactor_0' 00:35:01.974 12:33:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106269 root 20 0 64.2g 46592 33280 R 99.9 0.4 0:01.44 reactor_0 00:35:01.974 12:33:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:01.974 12:33:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:01.974 12:33:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:35:01.974 12:33:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:35:01.974 12:33:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:01.974 12:33:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:35:01.974 12:33:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = 
\i\d\l\e ]] 00:35:01.974 12:33:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:01.974 12:33:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:35:01.974 12:33:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:35:01.974 12:33:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 106269 1 00:35:01.974 12:33:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 106269 1 busy 00:35:01.974 12:33:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=106269 00:35:01.974 12:33:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:01.974 12:33:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:35:01.974 12:33:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:35:01.974 12:33:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:01.974 12:33:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:35:01.974 12:33:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:01.974 12:33:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:01.974 12:33:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:01.974 12:33:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106269 -w 256 00:35:01.974 12:33:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:01.974 12:33:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106273 root 20 0 64.2g 46592 33280 R 60.0 0.4 0:00.74 reactor_1' 00:35:01.974 12:33:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106273 root 20 0 64.2g 46592 33280 R 60.0 0.4 0:00.74 reactor_1 00:35:01.974 12:33:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:01.974 12:33:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:01.974 12:33:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=60.0 00:35:01.974 12:33:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=60 00:35:01.974 12:33:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:01.974 12:33:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:35:01.974 12:33:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:35:01.974 12:33:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:01.974 12:33:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 106345 00:35:11.969 Initializing NVMe Controllers 00:35:11.969 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:35:11.969 Controller IO queue size 256, less than required. 00:35:11.969 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:11.969 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:35:11.969 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:35:11.969 Initialization complete. Launching workers. 
00:35:11.969 ======================================================== 00:35:11.969 Latency(us) 00:35:11.969 Device Information : IOPS MiB/s Average min max 00:35:11.969 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 8669.30 33.86 29563.98 10261.85 86563.19 00:35:11.969 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 8740.90 34.14 29316.51 10076.73 84839.36 00:35:11.969 ======================================================== 00:35:11.969 Total : 17410.19 68.01 29439.74 10076.73 86563.19 00:35:11.969 00:35:11.969 12:34:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:35:11.969 12:34:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 106269 0 00:35:11.969 12:34:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 106269 0 idle 00:35:11.969 12:34:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=106269 00:35:11.969 12:34:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:11.969 12:34:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:11.969 12:34:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:11.969 12:34:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:11.969 12:34:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:11.969 12:34:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:11.969 12:34:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:11.969 12:34:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:11.969 12:34:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:11.969 12:34:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106269 -w 256 00:35:11.969 12:34:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:11.969 12:34:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106269 root 20 0 64.2g 46592 33280 S 6.7 0.4 0:12.46 reactor_0' 00:35:11.969 12:34:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:11.969 12:34:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106269 root 20 0 64.2g 46592 33280 S 6.7 0.4 0:12.46 reactor_0 00:35:11.970 12:34:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:11.970 12:34:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:35:11.970 12:34:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:35:11.970 12:34:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:11.970 12:34:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:11.970 12:34:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:11.970 12:34:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:11.970 12:34:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:35:11.970 12:34:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 106269 1 00:35:11.970 12:34:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 106269 1 idle 00:35:11.970 12:34:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=106269 00:35:11.970 12:34:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # 
local idx=1 00:35:11.970 12:34:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:11.970 12:34:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:11.970 12:34:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:11.970 12:34:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:11.970 12:34:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:11.970 12:34:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:11.970 12:34:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:11.970 12:34:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:11.970 12:34:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106269 -w 256 00:35:11.970 12:34:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:11.970 12:34:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106273 root 20 0 64.2g 46592 33280 S 0.0 0.4 0:06.17 reactor_1' 00:35:11.970 12:34:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:11.970 12:34:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106273 root 20 0 64.2g 46592 33280 S 0.0 0.4 0:06.17 reactor_1 00:35:11.970 12:34:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:11.970 12:34:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:11.970 12:34:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:11.970 12:34:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:11.970 12:34:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:11.970 12:34:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:11.970 12:34:06 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:11.970 12:34:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid=5934b511-67a7-4c73-84f0-52fdf36bb79d -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:35:11.970 12:34:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:35:11.970 12:34:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:35:11.970 12:34:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:35:11.970 12:34:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:35:11.970 12:34:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:35:13.355 12:34:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:35:13.355 12:34:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:35:13.355 12:34:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:35:13.355 12:34:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:35:13.355 12:34:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:35:13.355 12:34:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:35:13.355 12:34:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for 
i in {0..1} 00:35:13.355 12:34:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 106269 0 00:35:13.355 12:34:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 106269 0 idle 00:35:13.356 12:34:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=106269 00:35:13.356 12:34:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:13.356 12:34:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:13.356 12:34:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:13.356 12:34:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:13.356 12:34:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:13.356 12:34:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:13.356 12:34:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:13.356 12:34:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:13.356 12:34:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:13.356 12:34:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106269 -w 256 00:35:13.356 12:34:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:13.356 12:34:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106269 root 20 0 64.2g 48768 33280 S 0.0 0.4 0:12.50 reactor_0' 00:35:13.356 12:34:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106269 root 20 0 64.2g 48768 33280 S 0.0 0.4 0:12.50 reactor_0 00:35:13.356 12:34:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:13.356 12:34:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:13.356 12:34:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:13.356 12:34:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:13.356 12:34:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:13.356 12:34:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:13.356 12:34:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:13.356 12:34:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:13.356 12:34:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:35:13.356 12:34:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 106269 1 00:35:13.356 12:34:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 106269 1 idle 00:35:13.356 12:34:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=106269 00:35:13.356 12:34:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:13.356 12:34:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:13.356 12:34:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:13.356 12:34:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:13.356 12:34:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:13.356 12:34:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:13.356 12:34:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:13.356 12:34:09 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:13.356 12:34:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:13.356 12:34:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 106269 -w 256 00:35:13.356 12:34:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:13.617 12:34:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 106273 root 20 0 64.2g 48768 33280 S 0.0 0.4 0:06.17 reactor_1' 00:35:13.617 12:34:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:13.617 12:34:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:13.617 12:34:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 106273 root 20 0 64.2g 48768 33280 S 0.0 0.4 0:06.17 reactor_1 00:35:13.617 12:34:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:13.617 12:34:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:13.617 12:34:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:13.617 12:34:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:13.617 12:34:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:13.617 12:34:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:13.617 12:34:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:13.617 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:13.617 12:34:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:13.617 12:34:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:35:13.617 12:34:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:35:13.617 12:34:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:13.617 12:34:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:35:13.617 12:34:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:13.617 12:34:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:35:13.617 12:34:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:35:13.617 12:34:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:35:13.617 12:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:13.617 12:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:35:13.878 12:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:13.878 12:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:35:13.878 12:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:13.878 12:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:13.878 rmmod nvme_tcp 00:35:13.878 rmmod nvme_fabrics 00:35:13.878 rmmod nvme_keyring 00:35:13.878 12:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:13.878 12:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:35:13.878 12:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:35:13.878 12:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 106269 ']' 00:35:13.878 12:34:09 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 106269 00:35:13.878 12:34:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 106269 ']' 00:35:13.878 12:34:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 106269 00:35:13.878 12:34:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:35:13.878 12:34:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:13.878 12:34:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 106269 00:35:13.878 12:34:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:13.878 12:34:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:13.878 killing process with pid 106269 00:35:13.878 12:34:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 106269' 00:35:13.878 12:34:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 106269 00:35:13.878 12:34:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 106269 00:35:14.136 12:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:14.136 12:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:14.136 12:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:14.136 12:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:35:14.136 12:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:14.136 12:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:35:14.136 12:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:35:14.136 12:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:14.136 12:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:35:14.136 12:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:35:14.136 12:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:35:14.136 12:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:35:14.136 12:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:35:14.136 12:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:35:14.136 12:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:35:14.137 12:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:35:14.137 12:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:35:14.137 12:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:35:14.137 12:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:35:14.137 12:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:35:14.137 12:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:35:14.137 12:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:35:14.137 12:34:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@246 -- # remove_spdk_ns 00:35:14.137 12:34:09 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:14.137 12:34:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:14.137 12:34:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:14.397 12:34:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@300 -- # return 0 00:35:14.397 00:35:14.397 real 0m15.587s 00:35:14.397 user 0m28.530s 00:35:14.397 sys 0m5.466s 00:35:14.397 12:34:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:14.397 12:34:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:14.397 ************************************ 00:35:14.397 END TEST nvmf_interrupt 00:35:14.397 ************************************ 00:35:14.397 00:35:14.397 real 19m2.501s 00:35:14.397 user 50m14.689s 00:35:14.397 sys 3m40.383s 00:35:14.397 12:34:10 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:14.397 ************************************ 00:35:14.397 END TEST nvmf_tcp 00:35:14.397 ************************************ 00:35:14.397 12:34:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:14.397 12:34:10 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:35:14.397 12:34:10 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:14.397 12:34:10 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:14.397 12:34:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:14.397 12:34:10 -- common/autotest_common.sh@10 -- # set +x 00:35:14.397 ************************************ 00:35:14.397 START TEST spdkcli_nvmf_tcp 00:35:14.397 ************************************ 00:35:14.397 12:34:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:14.397 * Looking for test storage... 
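The nvmf_interrupt teardown traced above leans on two small shell idioms: sampling top once to decide whether a reactor thread has gone idle, and polling lsblk until a disconnected controller's serial number disappears. A minimal re-creation of both follows; the function names are illustrative rather than the SPDK originals, and it assumes top's thread view keeps %CPU in column 9, as the trace shows.

    # Succeed if the named reactor thread of $pid is at or below the idle
    # threshold (percent CPU, integer). One batch iteration of top gives a
    # point sample, matching the interrupt/common.sh trace above.
    reactor_is_idle() {
        local pid=$1 thread=$2 idle_threshold=${3:-5}
        local row cpu
        row=$(top -bHn 1 -p "$pid" -w 256 | grep "$thread") || return 1
        cpu=$(awk '{print $9}' <<<"$row" | sed -e 's/^\s*//g')
        (( ${cpu%.*} <= idle_threshold ))   # drop the fraction, compare as int
    }

    # Poll until no block device reports the given NVMe serial number, i.e.
    # the 'nvme disconnect' above has fully torn the namespace down.
    wait_serial_gone() {
        local serial=$1 i
        for ((i = 0; i < 15; i++)); do
            lsblk -o NAME,SERIAL | grep -q -w "$serial" || return 0
            sleep 1
        done
        return 1
    }

    # e.g. reactor_is_idle 106269 reactor_1 && wait_serial_gone SPDKISFASTANDAWESOME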
00:35:14.397 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:35:14.397 12:34:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:14.397 12:34:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:14.397 12:34:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:35:14.397 12:34:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:14.397 12:34:10 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:14.397 12:34:10 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:14.397 12:34:10 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:14.397 12:34:10 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:35:14.397 12:34:10 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:35:14.397 12:34:10 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:35:14.397 12:34:10 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:35:14.397 12:34:10 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:35:14.397 12:34:10 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:35:14.397 12:34:10 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:35:14.397 12:34:10 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:14.397 12:34:10 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:35:14.397 12:34:10 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:35:14.397 12:34:10 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:14.397 12:34:10 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:14.397 12:34:10 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:35:14.397 12:34:10 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:35:14.397 12:34:10 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:14.397 12:34:10 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:35:14.397 12:34:10 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:35:14.397 12:34:10 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:35:14.397 12:34:10 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:35:14.397 12:34:10 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:14.397 12:34:10 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:35:14.397 12:34:10 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:35:14.397 12:34:10 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:14.397 12:34:10 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:14.397 12:34:10 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:35:14.397 12:34:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:14.397 12:34:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:14.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:14.397 --rc genhtml_branch_coverage=1 00:35:14.397 --rc genhtml_function_coverage=1 00:35:14.397 --rc genhtml_legend=1 00:35:14.397 --rc geninfo_all_blocks=1 00:35:14.397 --rc geninfo_unexecuted_blocks=1 00:35:14.397 00:35:14.397 ' 00:35:14.397 12:34:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:14.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:14.397 --rc genhtml_branch_coverage=1 
00:35:14.397 --rc genhtml_function_coverage=1 00:35:14.397 --rc genhtml_legend=1 00:35:14.397 --rc geninfo_all_blocks=1 00:35:14.397 --rc geninfo_unexecuted_blocks=1 00:35:14.397 00:35:14.397 ' 00:35:14.397 12:34:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:14.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:14.397 --rc genhtml_branch_coverage=1 00:35:14.397 --rc genhtml_function_coverage=1 00:35:14.397 --rc genhtml_legend=1 00:35:14.397 --rc geninfo_all_blocks=1 00:35:14.397 --rc geninfo_unexecuted_blocks=1 00:35:14.397 00:35:14.397 ' 00:35:14.397 12:34:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:14.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:14.397 --rc genhtml_branch_coverage=1 00:35:14.397 --rc genhtml_function_coverage=1 00:35:14.397 --rc genhtml_legend=1 00:35:14.397 --rc geninfo_all_blocks=1 00:35:14.397 --rc geninfo_unexecuted_blocks=1 00:35:14.397 00:35:14.397 ' 00:35:14.397 12:34:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:35:14.397 12:34:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:35:14.398 12:34:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:35:14.398 12:34:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:35:14.398 12:34:10 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:35:14.398 12:34:10 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:14.398 12:34:10 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:14.398 12:34:10 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:14.398 12:34:10 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:14.398 12:34:10 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:14.398 12:34:10 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:14.398 12:34:10 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:14.398 12:34:10 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:14.398 12:34:10 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:14.398 12:34:10 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:14.398 12:34:10 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:35:14.398 12:34:10 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:35:14.398 12:34:10 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:14.398 12:34:10 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:14.398 12:34:10 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:35:14.398 12:34:10 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:14.398 12:34:10 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:14.398 12:34:10 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:35:14.398 12:34:10 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:14.398 12:34:10 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:14.398 12:34:10 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:14.398 12:34:10 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:14.398 12:34:10 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:14.398 12:34:10 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:14.398 12:34:10 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:35:14.398 12:34:10 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:14.398 12:34:10 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:35:14.398 12:34:10 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:14.398 12:34:10 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:14.398 12:34:10 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:14.398 12:34:10 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:14.398 12:34:10 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:14.398 12:34:10 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:14.398 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:14.398 12:34:10 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:14.398 12:34:10 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:14.398 12:34:10 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:14.398 12:34:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:35:14.398 12:34:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:35:14.398 12:34:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:35:14.398 12:34:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:35:14.398 12:34:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:14.398 12:34:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- 
# set +x 00:35:14.398 12:34:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:35:14.398 12:34:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=106694 00:35:14.398 12:34:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:35:14.398 12:34:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 106694 00:35:14.398 12:34:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 106694 ']' 00:35:14.398 12:34:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:14.398 12:34:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:14.398 12:34:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:14.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:14.398 12:34:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:14.398 12:34:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:14.658 [2024-11-25 12:34:10.295986] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:35:14.658 [2024-11-25 12:34:10.296041] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106694 ] 00:35:14.658 [2024-11-25 12:34:10.430513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:14.658 [2024-11-25 12:34:10.474865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:14.658 [2024-11-25 12:34:10.474998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:15.601 12:34:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:15.601 12:34:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:35:15.601 12:34:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:35:15.601 12:34:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:15.601 12:34:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:15.601 12:34:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:35:15.601 12:34:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:35:15.601 12:34:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:35:15.601 12:34:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:15.601 12:34:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:15.601 12:34:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:35:15.601 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:35:15.601 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:35:15.601 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:35:15.601 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:35:15.601 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:35:15.601 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:35:15.601 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW 
max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:15.601 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:35:15.601 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:35:15.601 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:15.601 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:15.602 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:35:15.602 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:15.602 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:15.602 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:35:15.602 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:15.602 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:15.602 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:15.602 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:15.602 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:35:15.602 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:35:15.602 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:15.602 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:35:15.602 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:15.602 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:35:15.602 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:35:15.602 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:35:15.602 ' 00:35:18.147 [2024-11-25 12:34:13.829221] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:19.529 [2024-11-25 12:34:15.110156] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:35:22.065 [2024-11-25 12:34:17.439529] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:35:23.976 [2024-11-25 12:34:19.440583] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:35:25.396 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:35:25.396 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:35:25.396 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:35:25.396 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 
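Each path/verb line in the spdkcli batch above corresponds to a plain JSON-RPC call against the running nvmf_tgt. Roughly equivalent rpc.py invocations for a few of them are sketched below; the flag spellings are from memory rather than from this tree, so verify them against scripts/rpc.py before relying on them.

    # 32 MiB malloc bdev with 512-byte blocks   (/bdevs/malloc create 32 512 Malloc1)
    scripts/rpc.py bdev_malloc_create 32 512 -b Malloc1
    # TCP transport with an 8 KiB I/O unit      (nvmf/transport create tcp ... io_unit_size=8192)
    scripts/rpc.py nvmf_create_transport -t tcp -u 8192
    # subsystem with serial, namespace cap, and allow-any-host
    scripts/rpc.py nvmf_create_subsystem nqn.2014-08.org.spdk:cnode1 \
        -s N37SXV509SRW -m 4 -a
    # attach Malloc3 as namespace 1, then listen on 127.0.0.1:4260
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2014-08.org.spdk:cnode1 Malloc3 -n 1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.spdk:cnode1 \
        -t tcp -a 127.0.0.1 -s 4260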
00:35:25.396 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:35:25.396 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:35:25.396 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:35:25.396 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:25.396 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:35:25.396 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:35:25.396 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:25.396 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:25.396 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:35:25.396 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:25.396 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:25.396 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:35:25.396 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:25.396 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:25.396 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:25.396 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:25.396 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:35:25.396 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:35:25.396 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:25.396 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:35:25.397 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:25.397 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:35:25.397 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:35:25.397 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:35:25.397 12:34:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:35:25.397 12:34:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:25.397 12:34:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- 
# set +x 00:35:25.397 12:34:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:35:25.397 12:34:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:25.397 12:34:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:25.397 12:34:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:35:25.397 12:34:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:35:25.654 12:34:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:35:25.911 12:34:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:35:25.911 12:34:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:35:25.911 12:34:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:25.911 12:34:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:25.911 12:34:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:35:25.911 12:34:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:25.911 12:34:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:25.911 12:34:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:35:25.911 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:35:25.911 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:25.911 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:35:25.911 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:35:25.911 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:35:25.911 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:35:25.911 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:25.911 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:35:25.911 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:35:25.911 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:35:25.911 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:35:25.911 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:35:25.911 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:35:25.911 ' 00:35:31.172 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:35:31.172 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:35:31.172 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:31.172 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:35:31.172 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:35:31.172 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:35:31.172 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:35:31.172 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:31.172 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:35:31.172 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:35:31.172 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:35:31.172 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:35:31.172 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:35:31.172 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:35:31.172 12:34:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:35:31.172 12:34:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:31.172 12:34:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:31.172 12:34:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 106694 00:35:31.172 12:34:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 106694 ']' 00:35:31.172 12:34:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 106694 00:35:31.172 12:34:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:35:31.172 12:34:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:31.172 12:34:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 106694 00:35:31.172 12:34:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:31.172 12:34:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:31.172 killing process with pid 106694 00:35:31.172 12:34:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 106694' 00:35:31.172 12:34:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 106694 00:35:31.172 12:34:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 106694 00:35:31.432 12:34:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:35:31.432 12:34:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:35:31.432 12:34:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 106694 ']' 00:35:31.432 12:34:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 106694 00:35:31.432 12:34:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 106694 ']' 00:35:31.432 12:34:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 106694 00:35:31.432 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (106694) - No such process 00:35:31.432 Process with pid 106694 is not found 00:35:31.432 12:34:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 106694 is not found' 00:35:31.432 12:34:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:35:31.432 12:34:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:35:31.432 12:34:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:35:31.432 00:35:31.432 real 0m17.018s 00:35:31.432 user 0m36.470s 00:35:31.432 sys 0m0.605s 00:35:31.432 12:34:27 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:35:31.432 12:34:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:31.432 ************************************ 00:35:31.432 END TEST spdkcli_nvmf_tcp 00:35:31.432 ************************************ 00:35:31.432 12:34:27 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:31.432 12:34:27 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:31.432 12:34:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:31.432 12:34:27 -- common/autotest_common.sh@10 -- # set +x 00:35:31.432 ************************************ 00:35:31.432 START TEST nvmf_identify_passthru 00:35:31.432 ************************************ 00:35:31.432 12:34:27 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:31.432 * Looking for test storage... 00:35:31.432 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:35:31.432 12:34:27 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:31.432 12:34:27 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:35:31.432 12:34:27 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:31.432 12:34:27 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:31.432 12:34:27 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:31.432 12:34:27 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:31.432 12:34:27 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:31.432 12:34:27 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:35:31.432 12:34:27 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:35:31.432 12:34:27 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:35:31.432 12:34:27 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:35:31.432 12:34:27 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:35:31.432 12:34:27 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:35:31.432 12:34:27 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:35:31.432 12:34:27 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:31.432 12:34:27 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:35:31.432 12:34:27 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:35:31.432 12:34:27 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:31.432 12:34:27 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:31.432 12:34:27 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:35:31.432 12:34:27 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:35:31.432 12:34:27 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:31.432 12:34:27 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:35:31.432 12:34:27 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:35:31.432 12:34:27 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:35:31.432 12:34:27 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:35:31.432 12:34:27 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:31.432 12:34:27 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:35:31.691 12:34:27 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:35:31.691 12:34:27 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:31.691 12:34:27 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:31.691 12:34:27 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:35:31.691 12:34:27 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:31.691 12:34:27 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:31.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:31.691 --rc genhtml_branch_coverage=1 00:35:31.691 --rc genhtml_function_coverage=1 00:35:31.691 --rc genhtml_legend=1 00:35:31.691 --rc geninfo_all_blocks=1 00:35:31.692 --rc geninfo_unexecuted_blocks=1 00:35:31.692 00:35:31.692 ' 00:35:31.692 12:34:27 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:31.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:31.692 --rc genhtml_branch_coverage=1 00:35:31.692 --rc genhtml_function_coverage=1 00:35:31.692 --rc genhtml_legend=1 00:35:31.692 --rc geninfo_all_blocks=1 00:35:31.692 --rc geninfo_unexecuted_blocks=1 00:35:31.692 00:35:31.692 ' 00:35:31.692 12:34:27 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:31.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:31.692 --rc genhtml_branch_coverage=1 00:35:31.692 --rc genhtml_function_coverage=1 00:35:31.692 --rc genhtml_legend=1 00:35:31.692 --rc geninfo_all_blocks=1 00:35:31.692 --rc geninfo_unexecuted_blocks=1 00:35:31.692 00:35:31.692 ' 00:35:31.692 12:34:27 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:31.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:31.692 --rc genhtml_branch_coverage=1 00:35:31.692 --rc genhtml_function_coverage=1 00:35:31.692 --rc genhtml_legend=1 00:35:31.692 --rc geninfo_all_blocks=1 00:35:31.692 --rc geninfo_unexecuted_blocks=1 00:35:31.692 00:35:31.692 ' 00:35:31.692 12:34:27 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:35:31.692 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:35:31.692 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:31.692 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:31.692 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:31.692 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:31.692 
12:34:27 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:31.692 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:31.692 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:31.692 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:31.692 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:31.692 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:31.692 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:35:31.692 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:35:31.692 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:31.692 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:31.692 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:35:31.692 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:31.692 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:31.692 12:34:27 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:31.692 12:34:27 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:31.692 12:34:27 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:31.692 12:34:27 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:31.692 12:34:27 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.692 12:34:27 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.692 12:34:27 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.692 12:34:27 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:31.692 12:34:27 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.692 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:35:31.692 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:31.692 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:31.692 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:31.692 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:31.692 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:31.692 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:31.692 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:31.692 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:31.692 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:31.692 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:31.692 12:34:27 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:31.692 12:34:27 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:31.692 12:34:27 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:31.692 12:34:27 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:31.692 12:34:27 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:31.692 12:34:27 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.693 12:34:27 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.693 12:34:27 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.693 12:34:27 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:31.693 12:34:27 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.693 12:34:27 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:35:31.693 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:31.693 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:31.693 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:31.693 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:31.693 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:31.693 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:31.693 12:34:27 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:31.693 12:34:27 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:31.693 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:35:31.693 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:35:31.693 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:35:31.693 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:35:31.693 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:35:31.693 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@460 -- # nvmf_veth_init 00:35:31.693 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:31.693 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:35:31.693 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:35:31.693 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:35:31.693 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:31.693 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:35:31.693 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:35:31.693 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:35:31.693 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:35:31.693 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@154 -- # 
NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:35:31.693 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:35:31.693 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:31.693 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:35:31.693 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:35:31.693 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:35:31.693 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:35:31.693 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:35:31.693 Cannot find device "nvmf_init_br" 00:35:31.693 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@162 -- # true 00:35:31.693 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:35:31.693 Cannot find device "nvmf_init_br2" 00:35:31.693 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@163 -- # true 00:35:31.693 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:35:31.693 Cannot find device "nvmf_tgt_br" 00:35:31.693 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@164 -- # true 00:35:31.693 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:35:31.693 Cannot find device "nvmf_tgt_br2" 00:35:31.693 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@165 -- # true 00:35:31.693 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:35:31.693 Cannot find device "nvmf_init_br" 00:35:31.693 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@166 -- # true 00:35:31.693 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:35:31.693 Cannot find device "nvmf_init_br2" 00:35:31.693 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@167 -- # true 00:35:31.693 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:35:31.693 Cannot find device "nvmf_tgt_br" 00:35:31.693 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@168 -- # true 00:35:31.693 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:35:31.693 Cannot find device "nvmf_tgt_br2" 00:35:31.693 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@169 -- # true 00:35:31.693 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:35:31.693 Cannot find device "nvmf_br" 00:35:31.693 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@170 -- # true 00:35:31.693 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:35:31.693 Cannot find device "nvmf_init_if" 00:35:31.693 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@171 -- # true 00:35:31.693 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:35:31.693 Cannot find device "nvmf_init_if2" 00:35:31.693 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@172 -- # true 00:35:31.693 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:35:31.693 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:31.693 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@173 -- # true 00:35:31.693 12:34:27 nvmf_identify_passthru -- 
nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:35:31.693 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:31.693 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@174 -- # true 00:35:31.693 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:35:31.693 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:35:31.693 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:35:31.693 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:35:31.694 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:35:31.694 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:35:31.694 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:35:31.694 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:35:31.694 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:35:31.694 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:35:31.694 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:35:31.694 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:35:31.694 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:35:31.694 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:35:31.694 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:35:31.694 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:35:31.694 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:35:31.694 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:35:31.694 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:35:31.694 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:35:31.694 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:35:31.694 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:35:31.694 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:35:31.694 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:35:31.694 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:35:31.694 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:35:31.694 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:35:31.694 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 
'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:35:31.694 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:35:31.694 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:35:31.694 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:35:31.694 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:35:31.955 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:35:31.955 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:35:31.955 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:35:31.955 00:35:31.955 --- 10.0.0.3 ping statistics --- 00:35:31.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:31.955 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:35:31.955 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:35:31.955 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:35:31.955 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.037 ms 00:35:31.955 00:35:31.955 --- 10.0.0.4 ping statistics --- 00:35:31.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:31.955 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:35:31.955 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:35:31.955 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:31.955 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:35:31.955 00:35:31.955 --- 10.0.0.1 ping statistics --- 00:35:31.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:31.955 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:35:31.955 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:35:31.955 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:31.955 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.030 ms 00:35:31.955 00:35:31.955 --- 10.0.0.2 ping statistics --- 00:35:31.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:31.955 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:35:31.955 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:31.955 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@461 -- # return 0 00:35:31.955 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:31.955 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:31.955 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:31.955 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:31.955 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:31.955 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:31.955 12:34:27 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:31.955 12:34:27 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:35:31.955 12:34:27 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:31.955 12:34:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:31.955 12:34:27 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:35:31.955 12:34:27 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:35:31.955 12:34:27 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:35:31.955 12:34:27 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:35:31.955 12:34:27 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:35:31.955 12:34:27 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:35:31.955 12:34:27 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:35:31.955 12:34:27 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:35:31.955 12:34:27 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:35:31.955 12:34:27 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:35:31.955 12:34:27 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:35:31.955 12:34:27 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:35:31.955 12:34:27 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:35:31.955 12:34:27 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 00:35:31.955 12:34:27 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 00:35:31.955 12:34:27 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:35:31.955 12:34:27 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:35:31.955 12:34:27 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:35:32.215 12:34:27 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 
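The identify_passthru setup traced above derives its target controller from the first NVMe bdf and then reads that controller's serial number. Compressed into one pipeline, using the same binaries and paths as this workspace:

    # enumerate NVMe PCI addresses and keep the first (0000:00:10.0 in this run)
    bdf=$(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh \
        | jq -r '.config[].params.traddr' | head -n1)
    # identify the controller directly over PCIe and extract its serial number
    serial=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r "trtype:PCIe traddr:$bdf" -i 0 | awk '/Serial Number:/ {print $3}')
    echo "$bdf $serial"   # prints "0000:00:10.0 12340" for this run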
00:35:32.215 12:34:27 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:35:32.215 12:34:27 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:35:32.215 12:34:27 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:35:32.215 12:34:28 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:35:32.215 12:34:28 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:35:32.215 12:34:28 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:32.215 12:34:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:32.215 12:34:28 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:35:32.215 12:34:28 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:32.215 12:34:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:32.215 12:34:28 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=107219 00:35:32.215 12:34:28 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:32.215 12:34:28 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 107219 00:35:32.215 12:34:28 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 107219 ']' 00:35:32.215 12:34:28 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:35:32.215 12:34:28 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:32.215 12:34:28 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:32.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:32.215 12:34:28 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:32.215 12:34:28 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:32.215 12:34:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:32.216 [2024-11-25 12:34:28.084039] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:35:32.216 [2024-11-25 12:34:28.084097] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:32.474 [2024-11-25 12:34:28.223575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:32.474 [2024-11-25 12:34:28.266968] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:32.474 [2024-11-25 12:34:28.267005] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:32.474 [2024-11-25 12:34:28.267011] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:32.474 [2024-11-25 12:34:28.267017] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
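The target side is the nvmf_tgt binary launched inside the nvmf_tgt_ns_spdk namespace above: -m 0xF pins it to four cores (hence the four reactor notices that follow), -e 0xFFFF enables all tracepoint groups, and --wait-for-rpc holds subsystem initialization until the harness has enabled the passthru identify handler. A condensed reproduction of that launch, with an illustrative polling loop standing in for the harness's waitforlisten helper:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Launch in the test namespace; subsystem init is deferred by --wait-for-rpc.
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
nvmfpid=$!

# Poll until the UNIX-domain RPC socket answers (stand-in for waitforlisten).
until "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done

# Configure before init, as the rpc_cmd calls below do, then start the app.
"$rpc" nvmf_set_config --passthru-identify-ctrlr
"$rpc" framework_start_init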
00:35:32.474 [2024-11-25 12:34:28.267021] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:32.474 [2024-11-25 12:34:28.267937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:32.474 [2024-11-25 12:34:28.268110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:32.474 [2024-11-25 12:34:28.268487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:32.474 [2024-11-25 12:34:28.268590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:33.406 12:34:28 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:33.406 12:34:28 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:35:33.406 12:34:28 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:35:33.406 12:34:28 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.406 12:34:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:33.406 12:34:28 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.406 12:34:28 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:35:33.406 12:34:28 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.406 12:34:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:33.406 [2024-11-25 12:34:29.050793] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:35:33.406 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.406 12:34:29 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:33.406 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.406 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:33.406 [2024-11-25 12:34:29.060250] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:33.406 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.406 12:34:29 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:35:33.406 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:33.406 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:33.406 12:34:29 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:35:33.406 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.406 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:33.406 Nvme0n1 00:35:33.406 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.406 12:34:29 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:35:33.406 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.406 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:33.406 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.406 12:34:29 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:35:33.406 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.406 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:33.406 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.406 12:34:29 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:35:33.406 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.406 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:33.406 [2024-11-25 12:34:29.181902] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:35:33.406 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.406 12:34:29 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:35:33.406 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.406 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:33.406 [ 00:35:33.406 { 00:35:33.406 "allow_any_host": true, 00:35:33.406 "hosts": [], 00:35:33.406 "listen_addresses": [], 00:35:33.406 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:35:33.406 "subtype": "Discovery" 00:35:33.406 }, 00:35:33.406 { 00:35:33.406 "allow_any_host": true, 00:35:33.406 "hosts": [], 00:35:33.406 "listen_addresses": [ 00:35:33.406 { 00:35:33.406 "adrfam": "IPv4", 00:35:33.406 "traddr": "10.0.0.3", 00:35:33.406 "trsvcid": "4420", 00:35:33.406 "trtype": "TCP" 00:35:33.406 } 00:35:33.406 ], 00:35:33.406 "max_cntlid": 65519, 00:35:33.406 "max_namespaces": 1, 00:35:33.406 "min_cntlid": 1, 00:35:33.406 "model_number": "SPDK bdev Controller", 00:35:33.406 "namespaces": [ 00:35:33.406 { 00:35:33.406 "bdev_name": "Nvme0n1", 00:35:33.406 "name": "Nvme0n1", 00:35:33.406 "nguid": "9DADC546EE09487F99BD68D88AD362B6", 00:35:33.406 "nsid": 1, 00:35:33.406 "uuid": "9dadc546-ee09-487f-99bd-68d88ad362b6" 00:35:33.406 } 00:35:33.406 ], 00:35:33.406 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:33.406 "serial_number": "SPDK00000000000001", 00:35:33.406 "subtype": "NVMe" 00:35:33.406 } 00:35:33.406 ] 00:35:33.406 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.406 12:34:29 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:33.406 12:34:29 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:35:33.406 12:34:29 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:35:33.663 12:34:29 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:35:33.663 12:34:29 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:33.663 12:34:29 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:35:33.663 12:34:29 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:35:33.919 12:34:29 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:35:33.919 12:34:29 
nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:35:33.919 12:34:29 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:35:33.919 12:34:29 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:33.919 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.919 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:33.919 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.919 12:34:29 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:35:33.919 12:34:29 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:35:33.920 12:34:29 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:33.920 12:34:29 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:35:33.920 12:34:29 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:33.920 12:34:29 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:35:33.920 12:34:29 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:33.920 12:34:29 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:33.920 rmmod nvme_tcp 00:35:33.920 rmmod nvme_fabrics 00:35:33.920 rmmod nvme_keyring 00:35:33.920 12:34:29 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:33.920 12:34:29 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:35:33.920 12:34:29 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:35:33.920 12:34:29 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 107219 ']' 00:35:33.920 12:34:29 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 107219 00:35:33.920 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 107219 ']' 00:35:33.920 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 107219 00:35:33.920 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:35:33.920 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:34.180 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 107219 00:35:34.180 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:34.180 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:34.180 killing process with pid 107219 00:35:34.180 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 107219' 00:35:34.180 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 107219 00:35:34.180 12:34:29 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 107219 00:35:34.180 12:34:29 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:34.180 12:34:29 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:34.180 12:34:29 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:34.180 12:34:29 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:35:34.180 12:34:29 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:35:34.180 12:34:29 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:34.180 12:34:29 nvmf_identify_passthru -- nvmf/common.sh@791 -- # 
iptables-restore 00:35:34.180 12:34:29 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:34.180 12:34:29 nvmf_identify_passthru -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:35:34.180 12:34:29 nvmf_identify_passthru -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:35:34.180 12:34:29 nvmf_identify_passthru -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:35:34.180 12:34:29 nvmf_identify_passthru -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:35:34.180 12:34:30 nvmf_identify_passthru -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:35:34.180 12:34:30 nvmf_identify_passthru -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:35:34.180 12:34:30 nvmf_identify_passthru -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:35:34.180 12:34:30 nvmf_identify_passthru -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:35:34.181 12:34:30 nvmf_identify_passthru -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:35:34.181 12:34:30 nvmf_identify_passthru -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:35:34.441 12:34:30 nvmf_identify_passthru -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:35:34.441 12:34:30 nvmf_identify_passthru -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:35:34.441 12:34:30 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:35:34.441 12:34:30 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:35:34.441 12:34:30 nvmf_identify_passthru -- nvmf/common.sh@246 -- # remove_spdk_ns 00:35:34.441 12:34:30 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:34.441 12:34:30 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:34.441 12:34:30 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:34.441 12:34:30 nvmf_identify_passthru -- nvmf/common.sh@300 -- # return 0 00:35:34.441 00:35:34.441 real 0m3.016s 00:35:34.441 user 0m7.238s 00:35:34.441 sys 0m0.681s 00:35:34.441 12:34:30 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:34.441 ************************************ 00:35:34.441 END TEST nvmf_identify_passthru 00:35:34.441 ************************************ 00:35:34.441 12:34:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:34.441 12:34:30 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:35:34.441 12:34:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:34.441 12:34:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:34.441 12:34:30 -- common/autotest_common.sh@10 -- # set +x 00:35:34.441 ************************************ 00:35:34.441 START TEST nvmf_dif 00:35:34.441 ************************************ 00:35:34.441 12:34:30 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:35:34.441 * Looking for test storage... 
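Before nvmf_dif starts, the nvmftestfini above has torn down the identify_passthru fixtures in the reverse order of setup: firewall rules tagged SPDK_NVMF are filtered back out with iptables-save/iptables-restore, the bridge and host-side veth ends are deleted, and the namespaced ends plus the namespace itself go with remove_spdk_ns. The essentials, with the final ip netns delete written out as an assumption about what the remove_spdk_ns helper amounts to:

# Keep every firewall rule except the ones the suite tagged with SPDK_NVMF.
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Delete the bridge and the initiator-side interfaces...
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2

# ...then the target-side interfaces inside the namespace, and the netns
# itself (assumed equivalent of the remove_spdk_ns helper).
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk

nvmf_dif rebuilds exactly this topology from scratch, which is why the run of "Cannot find device" probes below is expected pre-cleanup noise rather than a failure.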
00:35:34.441 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:35:34.441 12:34:30 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:34.441 12:34:30 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:35:34.441 12:34:30 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:34.700 12:34:30 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:34.700 12:34:30 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:34.700 12:34:30 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:34.700 12:34:30 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:34.700 12:34:30 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:35:34.700 12:34:30 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:35:34.700 12:34:30 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:35:34.700 12:34:30 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:35:34.700 12:34:30 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:35:34.700 12:34:30 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:35:34.700 12:34:30 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:35:34.700 12:34:30 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:34.700 12:34:30 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:35:34.700 12:34:30 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:35:34.700 12:34:30 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:34.700 12:34:30 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:34.700 12:34:30 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:35:34.700 12:34:30 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:35:34.700 12:34:30 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:34.700 12:34:30 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:35:34.700 12:34:30 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:35:34.700 12:34:30 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:35:34.700 12:34:30 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:35:34.700 12:34:30 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:34.700 12:34:30 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:35:34.700 12:34:30 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:35:34.700 12:34:30 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:34.700 12:34:30 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:34.700 12:34:30 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:35:34.700 12:34:30 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:34.700 12:34:30 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:34.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:34.700 --rc genhtml_branch_coverage=1 00:35:34.700 --rc genhtml_function_coverage=1 00:35:34.700 --rc genhtml_legend=1 00:35:34.700 --rc geninfo_all_blocks=1 00:35:34.700 --rc geninfo_unexecuted_blocks=1 00:35:34.700 00:35:34.700 ' 00:35:34.700 12:34:30 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:34.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:34.700 --rc genhtml_branch_coverage=1 00:35:34.700 --rc genhtml_function_coverage=1 00:35:34.700 --rc genhtml_legend=1 00:35:34.700 --rc geninfo_all_blocks=1 00:35:34.700 --rc geninfo_unexecuted_blocks=1 00:35:34.700 00:35:34.700 ' 00:35:34.700 12:34:30 nvmf_dif -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:35:34.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:34.700 --rc genhtml_branch_coverage=1 00:35:34.700 --rc genhtml_function_coverage=1 00:35:34.700 --rc genhtml_legend=1 00:35:34.700 --rc geninfo_all_blocks=1 00:35:34.700 --rc geninfo_unexecuted_blocks=1 00:35:34.700 00:35:34.700 ' 00:35:34.700 12:34:30 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:34.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:34.700 --rc genhtml_branch_coverage=1 00:35:34.700 --rc genhtml_function_coverage=1 00:35:34.700 --rc genhtml_legend=1 00:35:34.700 --rc geninfo_all_blocks=1 00:35:34.700 --rc geninfo_unexecuted_blocks=1 00:35:34.700 00:35:34.700 ' 00:35:34.700 12:34:30 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:35:34.701 12:34:30 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:35:34.701 12:34:30 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:34.701 12:34:30 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:34.701 12:34:30 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:34.701 12:34:30 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:34.701 12:34:30 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:34.701 12:34:30 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:34.701 12:34:30 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:35:34.701 12:34:30 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:34.701 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:34.701 12:34:30 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:35:34.701 12:34:30 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:35:34.701 12:34:30 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:35:34.701 12:34:30 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:35:34.701 12:34:30 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:34.701 12:34:30 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:34.701 12:34:30 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:35:34.701 12:34:30 
nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:35:34.701 Cannot find device "nvmf_init_br" 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@162 -- # true 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:35:34.701 Cannot find device "nvmf_init_br2" 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@163 -- # true 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:35:34.701 Cannot find device "nvmf_tgt_br" 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@164 -- # true 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:35:34.701 Cannot find device "nvmf_tgt_br2" 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@165 -- # true 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:35:34.701 Cannot find device "nvmf_init_br" 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@166 -- # true 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:35:34.701 Cannot find device "nvmf_init_br2" 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@167 -- # true 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:35:34.701 Cannot find device "nvmf_tgt_br" 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@168 -- # true 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:35:34.701 Cannot find device "nvmf_tgt_br2" 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@169 -- # true 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:35:34.701 Cannot find device "nvmf_br" 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@170 -- # true 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:35:34.701 Cannot find device "nvmf_init_if" 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@171 -- # true 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:35:34.701 Cannot find device "nvmf_init_if2" 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@172 -- # true 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:35:34.701 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@173 -- # true 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:35:34.701 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@174 -- # true 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:35:34.701 12:34:30 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:35:34.702 12:34:30 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:35:34.702 12:34:30 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:35:34.959 12:34:30 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:35:34.959 12:34:30 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:35:34.959 12:34:30 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:35:34.959 12:34:30 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:35:34.959 12:34:30 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:35:34.959 12:34:30 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:35:34.959 12:34:30 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:35:34.959 12:34:30 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:35:34.959 12:34:30 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:35:34.959 12:34:30 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:35:34.959 12:34:30 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:35:34.959 12:34:30 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:35:34.959 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:35:34.959 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:35:34.959 00:35:34.959 --- 10.0.0.3 ping statistics --- 00:35:34.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:34.959 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:35:34.959 12:34:30 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:35:34.959 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:35:34.959 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:35:34.959 00:35:34.959 --- 10.0.0.4 ping statistics --- 00:35:34.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:34.959 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:35:34.959 12:34:30 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:35:34.959 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:34.959 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:35:34.959 00:35:34.959 --- 10.0.0.1 ping statistics --- 00:35:34.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:34.959 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:35:34.959 12:34:30 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:35:34.959 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:34.959 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:35:34.959 00:35:34.959 --- 10.0.0.2 ping statistics --- 00:35:34.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:34.959 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:35:34.959 12:34:30 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:34.959 12:34:30 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:35:34.959 12:34:30 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:35:34.959 12:34:30 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:35:35.217 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:35.217 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:35:35.217 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:35:35.217 12:34:30 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:35.217 12:34:30 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:35.217 12:34:30 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:35.217 12:34:30 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:35.217 12:34:30 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:35.217 12:34:30 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:35.217 12:34:30 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:35:35.217 12:34:30 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:35:35.217 12:34:30 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:35.217 12:34:30 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:35.217 12:34:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:35.217 12:34:30 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=107611 00:35:35.217 12:34:30 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 107611 00:35:35.217 12:34:30 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 107611 ']' 00:35:35.217 12:34:30 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:35.217 12:34:30 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:35.217 12:34:30 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:35.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:35.217 12:34:30 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:35.217 12:34:30 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:35.217 12:34:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:35.217 [2024-11-25 12:34:31.030200] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:35:35.217 [2024-11-25 12:34:31.030259] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:35.475 [2024-11-25 12:34:31.171550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:35.475 [2024-11-25 12:34:31.213550] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
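The nvmf_veth_init trace above is the network half of the dif fixtures: four veth pairs, a namespace holding the target-side ends, /24 addresses in 10.0.0.0, a bridge joining the host-side peers, tagged iptables ACCEPT rules for port 4420, and a ping sweep across all four addresses before the target comes up. Condensed, with the individual link-up calls folded into loops:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Target ends move into the namespace; both sides get 10.0.0.x/24 addresses.
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring everything up, then bridge the four host-side peers together.
for l in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$l" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$l" master nvmf_br
done

# Admit NVMe/TCP traffic from an initiator interface; the comment tag is what
# lets nvmftestfini strip the rule again later.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'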
00:35:35.475 [2024-11-25 12:34:31.213703] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:35.475 [2024-11-25 12:34:31.213761] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:35.475 [2024-11-25 12:34:31.213820] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:35.475 [2024-11-25 12:34:31.213862] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:35.475 [2024-11-25 12:34:31.214207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:36.039 12:34:31 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:36.039 12:34:31 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:35:36.039 12:34:31 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:36.039 12:34:31 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:36.039 12:34:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:36.039 12:34:31 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:36.039 12:34:31 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:35:36.039 12:34:31 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:35:36.039 12:34:31 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.039 12:34:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:36.039 [2024-11-25 12:34:31.897303] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:36.039 12:34:31 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.039 12:34:31 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:35:36.039 12:34:31 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:36.039 12:34:31 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:36.039 12:34:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:36.039 ************************************ 00:35:36.039 START TEST fio_dif_1_default 00:35:36.039 ************************************ 00:35:36.039 12:34:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:35:36.039 12:34:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:35:36.039 12:34:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:35:36.039 12:34:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:35:36.039 12:34:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:35:36.039 12:34:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:35:36.039 12:34:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:36.039 12:34:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.040 12:34:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:36.297 bdev_null0 00:35:36.297 12:34:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.297 12:34:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:36.297 12:34:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.297 12:34:31 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:36.297 12:34:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.297 12:34:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:36.297 12:34:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.297 12:34:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:36.297 12:34:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.297 12:34:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:35:36.297 12:34:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.297 12:34:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:36.297 [2024-11-25 12:34:31.941393] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:35:36.297 12:34:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.297 12:34:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:35:36.297 12:34:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:35:36.297 12:34:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:36.297 12:34:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:36.297 12:34:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:36.297 12:34:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:36.297 12:34:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:35:36.297 12:34:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:36.297 12:34:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:36.297 12:34:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:35:36.297 12:34:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:36.297 12:34:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:36.297 12:34:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:35:36.297 12:34:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:35:36.297 12:34:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:36.297 12:34:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:36.297 { 00:35:36.297 "params": { 00:35:36.297 "name": "Nvme$subsystem", 00:35:36.297 "trtype": "$TEST_TRANSPORT", 00:35:36.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:36.297 "adrfam": "ipv4", 00:35:36.297 "trsvcid": "$NVMF_PORT", 00:35:36.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:36.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:36.297 "hdgst": ${hdgst:-false}, 00:35:36.297 "ddgst": ${ddgst:-false} 00:35:36.297 }, 00:35:36.297 "method": "bdev_nvme_attach_controller" 00:35:36.297 } 00:35:36.297 EOF 
00:35:36.297 )") 00:35:36.297 12:34:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:36.297 12:34:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:35:36.297 12:34:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:35:36.297 12:34:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:35:36.297 12:34:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:36.297 12:34:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:36.297 12:34:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:35:36.297 12:34:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:35:36.297 12:34:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:35:36.297 12:34:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 00:35:36.297 12:34:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:35:36.297 12:34:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:36.297 "params": { 00:35:36.297 "name": "Nvme0", 00:35:36.297 "trtype": "tcp", 00:35:36.297 "traddr": "10.0.0.3", 00:35:36.297 "adrfam": "ipv4", 00:35:36.297 "trsvcid": "4420", 00:35:36.297 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:36.297 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:36.297 "hdgst": false, 00:35:36.297 "ddgst": false 00:35:36.297 }, 00:35:36.297 "method": "bdev_nvme_attach_controller" 00:35:36.297 }' 00:35:36.297 12:34:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:36.297 12:34:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:36.297 12:34:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:36.297 12:34:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:36.297 12:34:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:36.297 12:34:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:36.297 12:34:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:36.297 12:34:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:36.297 12:34:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:35:36.297 12:34:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:36.297 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:36.297 fio-3.35 00:35:36.297 Starting 1 thread 00:35:48.528 00:35:48.528 filename0: (groupid=0, jobs=1): err= 0: pid=107696: Mon Nov 25 12:34:42 2024 00:35:48.528 read: IOPS=2578, BW=10.1MiB/s (10.6MB/s)(101MiB/10009msec) 00:35:48.528 slat (nsec): min=5481, max=35373, avg=5787.97, stdev=813.11 00:35:48.528 clat (usec): min=301, max=41988, avg=1535.41, stdev=6851.53 00:35:48.528 lat (usec): min=307, max=41994, avg=1541.20, stdev=6851.60 00:35:48.528 clat percentiles (usec): 00:35:48.528 | 1.00th=[ 314], 5.00th=[ 318], 10.00th=[ 322], 20.00th=[ 326], 00:35:48.528 | 30.00th=[ 326], 40.00th=[ 330], 50.00th=[ 334], 60.00th=[ 
338], 00:35:48.528 | 70.00th=[ 338], 80.00th=[ 347], 90.00th=[ 367], 95.00th=[ 429], 00:35:48.528 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:35:48.528 | 99.99th=[42206] 00:35:48.528 bw ( KiB/s): min= 1280, max=16320, per=100.00%, avg=10323.20, stdev=4235.04, samples=20 00:35:48.529 iops : min= 320, max= 4080, avg=2580.80, stdev=1058.76, samples=20 00:35:48.529 lat (usec) : 500=97.02% 00:35:48.529 lat (msec) : 4=0.02%, 50=2.96% 00:35:48.529 cpu : usr=92.02%, sys=7.02%, ctx=32, majf=0, minf=9 00:35:48.529 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:48.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.529 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:48.529 issued rwts: total=25812,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:48.529 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:48.529 00:35:48.529 Run status group 0 (all jobs): 00:35:48.529 READ: bw=10.1MiB/s (10.6MB/s), 10.1MiB/s-10.1MiB/s (10.6MB/s-10.6MB/s), io=101MiB (106MB), run=10009-10009msec 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.529 00:35:48.529 real 0m10.895s 00:35:48.529 user 0m9.760s 00:35:48.529 sys 0m0.882s 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:48.529 ************************************ 00:35:48.529 END TEST fio_dif_1_default 00:35:48.529 ************************************ 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:48.529 12:34:42 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:35:48.529 12:34:42 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:48.529 12:34:42 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:48.529 12:34:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:48.529 ************************************ 00:35:48.529 START TEST fio_dif_1_multi_subsystems 00:35:48.529 ************************************ 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local 
files=1 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:48.529 bdev_null0 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:48.529 [2024-11-25 12:34:42.872457] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:48.529 bdev_null1 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 
--serial-number 53313233-1 --allow-any-host 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- 
nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:48.529 { 00:35:48.529 "params": { 00:35:48.529 "name": "Nvme$subsystem", 00:35:48.529 "trtype": "$TEST_TRANSPORT", 00:35:48.529 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:48.529 "adrfam": "ipv4", 00:35:48.529 "trsvcid": "$NVMF_PORT", 00:35:48.529 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:48.529 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:48.529 "hdgst": ${hdgst:-false}, 00:35:48.529 "ddgst": ${ddgst:-false} 00:35:48.529 }, 00:35:48.529 "method": "bdev_nvme_attach_controller" 00:35:48.529 } 00:35:48.529 EOF 00:35:48.529 )") 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:48.529 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:48.529 { 00:35:48.529 "params": { 00:35:48.530 "name": "Nvme$subsystem", 00:35:48.530 "trtype": "$TEST_TRANSPORT", 00:35:48.530 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:48.530 "adrfam": "ipv4", 00:35:48.530 "trsvcid": "$NVMF_PORT", 00:35:48.530 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:48.530 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:48.530 "hdgst": ${hdgst:-false}, 00:35:48.530 "ddgst": ${ddgst:-false} 00:35:48.530 }, 00:35:48.530 "method": "bdev_nvme_attach_controller" 00:35:48.530 } 00:35:48.530 EOF 00:35:48.530 )") 00:35:48.530 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:35:48.530 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:35:48.530 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:48.530 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
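The heredoc loop traced above is the pattern gen_nvmf_target_json uses: one bdev_nvme_attach_controller fragment per subsystem, collected into an array and joined on commas before being handed to the fio plugin. A standalone sketch under this run's values (tcp, 10.0.0.3, port 4420); the helper name build_attach_config is illustrative, not part of the suite:

build_attach_config() {
    # One attach fragment per subsystem id, mirroring the nvmf/common.sh heredoc.
    local subsystem config=()
    for subsystem in "$@"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.3",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,                    # join the fragments with commas
    printf '%s\n' "${config[*]}"   # same shape as the printf emitted below
}

build_attach_config 0 1            # the Nvme0/Nvme1 pair shown next in the log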
00:35:48.530 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:35:48.530 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:48.530 "params": { 00:35:48.530 "name": "Nvme0", 00:35:48.530 "trtype": "tcp", 00:35:48.530 "traddr": "10.0.0.3", 00:35:48.530 "adrfam": "ipv4", 00:35:48.530 "trsvcid": "4420", 00:35:48.530 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:48.530 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:48.530 "hdgst": false, 00:35:48.530 "ddgst": false 00:35:48.530 }, 00:35:48.530 "method": "bdev_nvme_attach_controller" 00:35:48.530 },{ 00:35:48.530 "params": { 00:35:48.530 "name": "Nvme1", 00:35:48.530 "trtype": "tcp", 00:35:48.530 "traddr": "10.0.0.3", 00:35:48.530 "adrfam": "ipv4", 00:35:48.530 "trsvcid": "4420", 00:35:48.530 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:48.530 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:48.530 "hdgst": false, 00:35:48.530 "ddgst": false 00:35:48.530 }, 00:35:48.530 "method": "bdev_nvme_attach_controller" 00:35:48.530 }' 00:35:48.530 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:48.530 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:48.530 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:48.530 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:48.530 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:48.530 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:48.530 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:48.530 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:48.530 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:35:48.530 12:34:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:48.530 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:48.530 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:48.530 fio-3.35 00:35:48.530 Starting 2 threads 00:35:58.492 00:35:58.492 filename0: (groupid=0, jobs=1): err= 0: pid=107861: Mon Nov 25 12:34:53 2024 00:35:58.492 read: IOPS=328, BW=1313KiB/s (1344kB/s)(12.8MiB/10018msec) 00:35:58.492 slat (nsec): min=5428, max=32601, avg=5808.47, stdev=1188.06 00:35:58.492 clat (usec): min=300, max=41325, avg=12170.64, stdev=18428.71 00:35:58.492 lat (usec): min=305, max=41332, avg=12176.45, stdev=18428.71 00:35:58.492 clat percentiles (usec): 00:35:58.492 | 1.00th=[ 310], 5.00th=[ 314], 10.00th=[ 318], 20.00th=[ 326], 00:35:58.492 | 30.00th=[ 330], 40.00th=[ 330], 50.00th=[ 334], 60.00th=[ 343], 00:35:58.492 | 70.00th=[ 355], 80.00th=[40633], 90.00th=[41157], 95.00th=[41157], 00:35:58.492 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:35:58.492 | 99.99th=[41157] 00:35:58.492 bw ( KiB/s): min= 864, max= 2208, per=49.42%, avg=1330.37, stdev=311.24, samples=19 00:35:58.492 iops : 
min= 216, max= 552, avg=332.58, stdev=77.81, samples=19 00:35:58.492 lat (usec) : 500=70.56%, 750=0.12% 00:35:58.492 lat (msec) : 10=0.12%, 50=29.20% 00:35:58.492 cpu : usr=94.85%, sys=4.70%, ctx=17, majf=0, minf=0 00:35:58.492 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:58.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.492 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.492 issued rwts: total=3288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:58.492 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:58.492 filename1: (groupid=0, jobs=1): err= 0: pid=107862: Mon Nov 25 12:34:53 2024 00:35:58.492 read: IOPS=345, BW=1381KiB/s (1414kB/s)(13.5MiB/10002msec) 00:35:58.492 slat (nsec): min=4943, max=52139, avg=6288.11, stdev=2000.76 00:35:58.492 clat (usec): min=296, max=41331, avg=11571.31, stdev=18142.23 00:35:58.492 lat (usec): min=302, max=41337, avg=11577.59, stdev=18142.41 00:35:58.492 clat percentiles (usec): 00:35:58.492 | 1.00th=[ 310], 5.00th=[ 318], 10.00th=[ 318], 20.00th=[ 326], 00:35:58.492 | 30.00th=[ 330], 40.00th=[ 334], 50.00th=[ 338], 60.00th=[ 347], 00:35:58.492 | 70.00th=[ 367], 80.00th=[40633], 90.00th=[41157], 95.00th=[41157], 00:35:58.492 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:35:58.492 | 99.99th=[41157] 00:35:58.492 bw ( KiB/s): min= 608, max= 2272, per=50.61%, avg=1362.53, stdev=550.06, samples=19 00:35:58.492 iops : min= 152, max= 568, avg=340.63, stdev=137.52, samples=19 00:35:58.492 lat (usec) : 500=71.41%, 750=0.78% 00:35:58.492 lat (msec) : 10=0.12%, 50=27.69% 00:35:58.492 cpu : usr=95.52%, sys=3.95%, ctx=27, majf=0, minf=0 00:35:58.492 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:58.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.492 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.492 issued rwts: total=3452,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:58.492 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:58.492 00:35:58.492 Run status group 0 (all jobs): 00:35:58.492 READ: bw=2691KiB/s (2756kB/s), 1313KiB/s-1381KiB/s (1344kB/s-1414kB/s), io=26.3MiB (27.6MB), run=10002-10018msec 00:35:58.492 12:34:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:35:58.492 12:34:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:35:58.492 12:34:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:58.492 12:34:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:58.492 12:34:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:35:58.492 12:34:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:58.492 12:34:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.492 12:34:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:58.492 12:34:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.492 12:34:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:58.492 12:34:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.492 12:34:53 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:58.492 12:34:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.492 12:34:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:58.492 12:34:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:58.492 12:34:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:35:58.493 12:34:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:58.493 12:34:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.493 12:34:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:58.493 12:34:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.493 12:34:53 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:58.493 12:34:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.493 12:34:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:58.493 ************************************ 00:35:58.493 END TEST fio_dif_1_multi_subsystems 00:35:58.493 ************************************ 00:35:58.493 12:34:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.493 00:35:58.493 real 0m11.016s 00:35:58.493 user 0m19.715s 00:35:58.493 sys 0m1.059s 00:35:58.493 12:34:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:58.493 12:34:53 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:58.493 12:34:53 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:35:58.493 12:34:53 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:58.493 12:34:53 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:58.493 12:34:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:58.493 ************************************ 00:35:58.493 START TEST fio_dif_rand_params 00:35:58.493 ************************************ 00:35:58.493 12:34:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:35:58.493 12:34:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:35:58.493 12:34:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:35:58.493 12:34:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:35:58.493 12:34:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:35:58.493 12:34:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:35:58.493 12:34:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:35:58.493 12:34:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:35:58.493 12:34:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:35:58.493 12:34:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:58.493 12:34:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:58.493 12:34:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:58.493 12:34:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:35:58.493 12:34:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:58.493 12:34:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.493 12:34:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:58.493 bdev_null0 00:35:58.493 12:34:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.493 12:34:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:58.493 12:34:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.493 12:34:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:58.493 12:34:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.493 12:34:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:58.493 12:34:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.493 12:34:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:58.493 12:34:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.493 12:34:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:35:58.493 12:34:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.493 12:34:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:58.493 [2024-11-25 12:34:53.930062] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:35:58.493 12:34:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.493 12:34:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:35:58.493 12:34:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:58.493 12:34:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:35:58.493 12:34:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:58.493 12:34:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:35:58.493 12:34:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:58.493 12:34:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:58.493 12:34:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:58.493 12:34:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:35:58.493 12:34:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:35:58.493 12:34:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:58.493 12:34:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:58.493 12:34:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:35:58.493 12:34:53 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:35:58.493 12:34:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:58.493 12:34:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:35:58.493 12:34:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:58.493 12:34:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:58.493 12:34:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:58.493 { 00:35:58.493 "params": { 00:35:58.493 "name": "Nvme$subsystem", 00:35:58.493 "trtype": "$TEST_TRANSPORT", 00:35:58.493 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:58.493 "adrfam": "ipv4", 00:35:58.493 "trsvcid": "$NVMF_PORT", 00:35:58.493 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:58.493 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:58.493 "hdgst": ${hdgst:-false}, 00:35:58.493 "ddgst": ${ddgst:-false} 00:35:58.493 }, 00:35:58.493 "method": "bdev_nvme_attach_controller" 00:35:58.493 } 00:35:58.493 EOF 00:35:58.493 )") 00:35:58.493 12:34:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:58.493 12:34:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:35:58.493 12:34:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:58.494 12:34:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:58.494 12:34:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:58.494 12:34:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:58.494 12:34:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
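Behind the fio_bdev indirection resolved above there are just two moving parts, both visible verbatim in this trace: LD_PRELOAD of the SPDK bdev engine, and stock fio given the JSON target config plus a job file (delivered here over /dev/fd pipes). A hand-run equivalent, assuming the same build paths; bdev.json and dif.job are placeholders for the two pipes:

# Preload the SPDK fio plugin and run stock fio against the generated config;
# ./bdev.json and ./dif.job stand in for /dev/fd/62 and /dev/fd/61 above.
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf ./bdev.json ./dif.job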
00:35:58.494 12:34:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:35:58.494 12:34:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:58.494 "params": { 00:35:58.494 "name": "Nvme0", 00:35:58.494 "trtype": "tcp", 00:35:58.494 "traddr": "10.0.0.3", 00:35:58.494 "adrfam": "ipv4", 00:35:58.494 "trsvcid": "4420", 00:35:58.494 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:58.494 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:58.494 "hdgst": false, 00:35:58.494 "ddgst": false 00:35:58.494 }, 00:35:58.494 "method": "bdev_nvme_attach_controller" 00:35:58.494 }' 00:35:58.494 12:34:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:58.494 12:34:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:58.494 12:34:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:35:58.494 12:34:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:35:58.494 12:34:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:35:58.494 12:34:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:35:58.494 12:34:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:35:58.494 12:34:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:35:58.494 12:34:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:35:58.494 12:34:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:58.494 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:58.494 ... 
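The job echo above (randread, 128KiB blocks, iodepth 3, three threads, runtime 5 from the dif.sh@103 settings) pins down most of what gen_fio_conf wrote to the job pipe. A plausible reconstruction; the actual file never appears in the log, and filename=Nvme0n1 is an assumption about the namespace exposed by the attached Nvme0 controller:

cat > dif.job <<'EOF'
[global]
thread=1
time_based=1
runtime=5

[filename0]
# assumed bdev name produced by bdev_nvme_attach_controller for cnode0
filename=Nvme0n1
rw=randread
bs=128k
iodepth=3
numjobs=3
EOF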
00:35:58.494 fio-3.35 00:35:58.494 Starting 3 threads 00:36:05.049 00:36:05.049 filename0: (groupid=0, jobs=1): err= 0: pid=108012: Mon Nov 25 12:34:59 2024 00:36:05.049 read: IOPS=274, BW=34.3MiB/s (36.0MB/s)(172MiB/5004msec) 00:36:05.049 slat (nsec): min=3927, max=21604, avg=7113.94, stdev=1459.33 00:36:05.049 clat (usec): min=2923, max=51260, avg=10915.50, stdev=10895.41 00:36:05.049 lat (usec): min=2929, max=51268, avg=10922.61, stdev=10895.52 00:36:05.049 clat percentiles (usec): 00:36:05.049 | 1.00th=[ 3163], 5.00th=[ 4883], 10.00th=[ 5407], 20.00th=[ 5932], 00:36:05.049 | 30.00th=[ 7504], 40.00th=[ 8160], 50.00th=[ 8586], 60.00th=[ 8848], 00:36:05.049 | 70.00th=[ 9110], 80.00th=[ 9372], 90.00th=[10159], 95.00th=[47973], 00:36:05.049 | 99.00th=[49546], 99.50th=[50070], 99.90th=[51119], 99.95th=[51119], 00:36:05.049 | 99.99th=[51119] 00:36:05.049 bw ( KiB/s): min=27648, max=46336, per=27.62%, avg=35097.60, stdev=7054.80, samples=10 00:36:05.049 iops : min= 216, max= 362, avg=274.20, stdev=55.12, samples=10 00:36:05.049 lat (msec) : 4=1.97%, 10=87.19%, 20=3.20%, 50=6.77%, 100=0.87% 00:36:05.049 cpu : usr=94.90%, sys=3.90%, ctx=5, majf=0, minf=0 00:36:05.049 IO depths : 1=4.0%, 2=96.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:05.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:05.049 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:05.049 issued rwts: total=1374,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:05.049 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:05.049 filename0: (groupid=0, jobs=1): err= 0: pid=108013: Mon Nov 25 12:34:59 2024 00:36:05.049 read: IOPS=359, BW=45.0MiB/s (47.1MB/s)(225MiB/5005msec) 00:36:05.049 slat (nsec): min=3114, max=42009, avg=6547.55, stdev=1629.46 00:36:05.049 clat (usec): min=3032, max=52490, avg=8326.62, stdev=4894.94 00:36:05.049 lat (usec): min=3038, max=52496, avg=8333.16, stdev=4895.03 00:36:05.049 clat percentiles (usec): 00:36:05.049 | 1.00th=[ 3064], 5.00th=[ 3097], 10.00th=[ 3130], 20.00th=[ 6128], 00:36:05.049 | 30.00th=[ 6521], 40.00th=[ 6915], 50.00th=[ 7308], 60.00th=[ 9896], 00:36:05.049 | 70.00th=[10421], 80.00th=[10945], 90.00th=[11338], 95.00th=[11600], 00:36:05.049 | 99.00th=[12649], 99.50th=[47449], 99.90th=[51119], 99.95th=[52691], 00:36:05.049 | 99.99th=[52691] 00:36:05.049 bw ( KiB/s): min=36096, max=65280, per=36.21%, avg=46003.20, stdev=8675.55, samples=10 00:36:05.049 iops : min= 282, max= 510, avg=359.40, stdev=67.78, samples=10 00:36:05.049 lat (msec) : 4=15.83%, 10=45.06%, 20=38.11%, 50=0.67%, 100=0.33% 00:36:05.049 cpu : usr=93.73%, sys=4.86%, ctx=11, majf=0, minf=0 00:36:05.049 IO depths : 1=21.1%, 2=78.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:05.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:05.049 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:05.049 issued rwts: total=1800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:05.049 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:05.049 filename0: (groupid=0, jobs=1): err= 0: pid=108014: Mon Nov 25 12:34:59 2024 00:36:05.049 read: IOPS=358, BW=44.8MiB/s (47.0MB/s)(224MiB/5005msec) 00:36:05.049 slat (nsec): min=4079, max=41681, avg=7324.94, stdev=1500.54 00:36:05.049 clat (usec): min=2782, max=50011, avg=8357.71, stdev=8237.85 00:36:05.049 lat (usec): min=2788, max=50021, avg=8365.04, stdev=8237.92 00:36:05.049 clat percentiles (usec): 00:36:05.049 | 1.00th=[ 2966], 5.00th=[ 4621], 10.00th=[ 5080], 
20.00th=[ 5407], 00:36:05.049 | 30.00th=[ 5735], 40.00th=[ 6652], 50.00th=[ 7111], 60.00th=[ 7439], 00:36:05.049 | 70.00th=[ 7635], 80.00th=[ 7898], 90.00th=[ 8225], 95.00th=[ 8717], 00:36:05.049 | 99.00th=[48497], 99.50th=[49021], 99.90th=[49546], 99.95th=[50070], 00:36:05.049 | 99.99th=[50070] 00:36:05.049 bw ( KiB/s): min=25344, max=57600, per=36.11%, avg=45875.20, stdev=10940.52, samples=10 00:36:05.049 iops : min= 198, max= 450, avg=358.40, stdev=85.47, samples=10 00:36:05.049 lat (msec) : 4=1.90%, 10=93.92%, 50=4.12%, 100=0.06% 00:36:05.049 cpu : usr=95.46%, sys=3.50%, ctx=6, majf=0, minf=9 00:36:05.049 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:05.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:05.049 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:05.049 issued rwts: total=1794,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:05.049 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:05.049 00:36:05.049 Run status group 0 (all jobs): 00:36:05.049 READ: bw=124MiB/s (130MB/s), 34.3MiB/s-45.0MiB/s (36.0MB/s-47.1MB/s), io=621MiB (651MB), run=5004-5005msec 00:36:05.049 12:34:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:36:05.049 12:34:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:05.049 12:34:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:05.049 12:34:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:05.049 12:34:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:05.049 12:34:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:05.049 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.049 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:05.049 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.049 12:34:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:05.049 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.049 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:05.049 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.049 12:34:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:36:05.049 12:34:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:36:05.049 12:34:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:36:05.049 12:34:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:36:05.049 12:34:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:36:05.049 12:34:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:36:05.049 12:34:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:36:05.049 12:34:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:05.049 12:34:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:05.049 12:34:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:05.049 12:34:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:05.049 12:34:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # 
rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:36:05.049 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.049 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:05.049 bdev_null0 00:36:05.049 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.049 12:34:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:05.050 [2024-11-25 12:34:59.827171] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:05.050 bdev_null1 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:05.050 bdev_null2 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:05.050 { 00:36:05.050 "params": { 00:36:05.050 "name": "Nvme$subsystem", 00:36:05.050 "trtype": "$TEST_TRANSPORT", 00:36:05.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:05.050 "adrfam": "ipv4", 00:36:05.050 "trsvcid": "$NVMF_PORT", 00:36:05.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:05.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:05.050 "hdgst": ${hdgst:-false}, 00:36:05.050 "ddgst": ${ddgst:-false} 00:36:05.050 }, 00:36:05.050 "method": "bdev_nvme_attach_controller" 00:36:05.050 } 00:36:05.050 EOF 00:36:05.050 )") 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:05.050 { 00:36:05.050 "params": { 00:36:05.050 "name": "Nvme$subsystem", 00:36:05.050 "trtype": "$TEST_TRANSPORT", 00:36:05.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:05.050 "adrfam": "ipv4", 00:36:05.050 "trsvcid": "$NVMF_PORT", 00:36:05.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:05.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:05.050 "hdgst": ${hdgst:-false}, 00:36:05.050 "ddgst": ${ddgst:-false} 00:36:05.050 }, 00:36:05.050 "method": "bdev_nvme_attach_controller" 00:36:05.050 } 00:36:05.050 EOF 00:36:05.050 )") 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 
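For reference, the rpc_cmd sequence traced through this pass (three DIF type 2 null bdevs, each behind its own subsystem and TCP listener) condenses to the loop below; rpc_cmd is the suite's wrapper, so plain scripts/rpc.py against the same running target should be equivalent:

# cnode0..cnode2: one 64 MiB null bdev each (512-byte blocks, 16-byte
# metadata, DIF type 2), exported over NVMe/TCP on 10.0.0.3:4420.
for i in 0 1 2; do
    ./scripts/rpc.py bdev_null_create "bdev_null$i" 64 512 \
        --md-size 16 --dif-type 2
    ./scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        --serial-number "53313233-$i" --allow-any-host
    ./scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
    ./scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.3 -s 4420
done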
00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:05.050 { 00:36:05.050 "params": { 00:36:05.050 "name": "Nvme$subsystem", 00:36:05.050 "trtype": "$TEST_TRANSPORT", 00:36:05.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:05.050 "adrfam": "ipv4", 00:36:05.050 "trsvcid": "$NVMF_PORT", 00:36:05.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:05.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:05.050 "hdgst": ${hdgst:-false}, 00:36:05.050 "ddgst": ${ddgst:-false} 00:36:05.050 }, 00:36:05.050 "method": "bdev_nvme_attach_controller" 00:36:05.050 } 00:36:05.050 EOF 00:36:05.050 )") 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:05.050 12:34:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:05.050 "params": { 00:36:05.050 "name": "Nvme0", 00:36:05.050 "trtype": "tcp", 00:36:05.051 "traddr": "10.0.0.3", 00:36:05.051 "adrfam": "ipv4", 00:36:05.051 "trsvcid": "4420", 00:36:05.051 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:05.051 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:05.051 "hdgst": false, 00:36:05.051 "ddgst": false 00:36:05.051 }, 00:36:05.051 "method": "bdev_nvme_attach_controller" 00:36:05.051 },{ 00:36:05.051 "params": { 00:36:05.051 "name": "Nvme1", 00:36:05.051 "trtype": "tcp", 00:36:05.051 "traddr": "10.0.0.3", 00:36:05.051 "adrfam": "ipv4", 00:36:05.051 "trsvcid": "4420", 00:36:05.051 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:05.051 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:05.051 "hdgst": false, 00:36:05.051 "ddgst": false 00:36:05.051 }, 00:36:05.051 "method": "bdev_nvme_attach_controller" 00:36:05.051 },{ 00:36:05.051 "params": { 00:36:05.051 "name": "Nvme2", 00:36:05.051 "trtype": "tcp", 00:36:05.051 "traddr": "10.0.0.3", 00:36:05.051 "adrfam": "ipv4", 00:36:05.051 "trsvcid": "4420", 00:36:05.051 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:36:05.051 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:36:05.051 "hdgst": false, 00:36:05.051 "ddgst": false 00:36:05.051 }, 00:36:05.051 "method": "bdev_nvme_attach_controller" 00:36:05.051 }' 00:36:05.051 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:05.051 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:05.051 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:05.051 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:05.051 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:05.051 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:05.051 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n 
'' ]] 00:36:05.051 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:36:05.051 12:34:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:05.051 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:05.051 ... 00:36:05.051 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:05.051 ... 00:36:05.051 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:05.051 ... 00:36:05.051 fio-3.35 00:36:05.051 Starting 24 threads 00:36:15.061 00:36:15.061 filename0: (groupid=0, jobs=1): err= 0: pid=108114: Mon Nov 25 12:35:10 2024 00:36:15.061 read: IOPS=224, BW=898KiB/s (919kB/s)(8980KiB/10002msec) 00:36:15.061 slat (usec): min=3, max=8027, avg=21.64, stdev=300.90 00:36:15.061 clat (msec): min=14, max=173, avg=71.13, stdev=26.74 00:36:15.061 lat (msec): min=14, max=173, avg=71.15, stdev=26.73 00:36:15.061 clat percentiles (msec): 00:36:15.061 | 1.00th=[ 18], 5.00th=[ 34], 10.00th=[ 41], 20.00th=[ 48], 00:36:15.061 | 30.00th=[ 51], 40.00th=[ 62], 50.00th=[ 72], 60.00th=[ 75], 00:36:15.061 | 70.00th=[ 83], 80.00th=[ 94], 90.00th=[ 105], 95.00th=[ 116], 00:36:15.061 | 99.00th=[ 140], 99.50th=[ 148], 99.90th=[ 174], 99.95th=[ 174], 00:36:15.061 | 99.99th=[ 174] 00:36:15.061 bw ( KiB/s): min= 512, max= 1536, per=3.72%, avg=871.58, stdev=270.93, samples=19 00:36:15.061 iops : min= 128, max= 384, avg=217.89, stdev=67.73, samples=19 00:36:15.061 lat (msec) : 20=2.00%, 50=26.86%, 100=58.57%, 250=12.56% 00:36:15.061 cpu : usr=37.67%, sys=0.38%, ctx=1059, majf=0, minf=9 00:36:15.061 IO depths : 1=2.9%, 2=6.6%, 4=16.9%, 8=63.4%, 16=10.2%, 32=0.0%, >=64=0.0% 00:36:15.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.061 complete : 0=0.0%, 4=91.9%, 8=3.0%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.061 issued rwts: total=2245,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:15.061 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:15.061 filename0: (groupid=0, jobs=1): err= 0: pid=108115: Mon Nov 25 12:35:10 2024 00:36:15.061 read: IOPS=267, BW=1072KiB/s (1097kB/s)(10.5MiB/10029msec) 00:36:15.061 slat (usec): min=5, max=8014, avg=10.65, stdev=154.53 00:36:15.061 clat (msec): min=10, max=142, avg=59.59, stdev=24.11 00:36:15.061 lat (msec): min=10, max=142, avg=59.60, stdev=24.11 00:36:15.061 clat percentiles (msec): 00:36:15.061 | 1.00th=[ 13], 5.00th=[ 24], 10.00th=[ 33], 20.00th=[ 39], 00:36:15.061 | 30.00th=[ 48], 40.00th=[ 51], 50.00th=[ 57], 60.00th=[ 62], 00:36:15.061 | 70.00th=[ 72], 80.00th=[ 80], 90.00th=[ 94], 95.00th=[ 106], 00:36:15.061 | 99.00th=[ 138], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:36:15.061 | 99.99th=[ 144] 00:36:15.061 bw ( KiB/s): min= 512, max= 1912, per=4.56%, avg=1068.00, stdev=343.16, samples=20 00:36:15.061 iops : min= 128, max= 478, avg=267.00, stdev=85.79, samples=20 00:36:15.061 lat (msec) : 20=2.75%, 50=35.58%, 100=55.08%, 250=6.59% 00:36:15.061 cpu : usr=40.28%, sys=0.36%, ctx=1253, majf=0, minf=9 00:36:15.061 IO depths : 1=1.5%, 2=3.1%, 4=11.2%, 8=72.5%, 16=11.7%, 32=0.0%, >=64=0.0% 00:36:15.061 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.061 complete : 0=0.0%, 4=90.2%, 8=4.9%, 16=4.9%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:36:15.061 issued rwts: total=2687,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:15.061 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:15.062 filename0: (groupid=0, jobs=1): err= 0: pid=108116: Mon Nov 25 12:35:10 2024 00:36:15.062 read: IOPS=223, BW=894KiB/s (916kB/s)(8944KiB/10001msec) 00:36:15.062 slat (usec): min=3, max=8035, avg=17.41, stdev=254.25 00:36:15.062 clat (usec): min=1097, max=173544, avg=71402.27, stdev=24321.58 00:36:15.062 lat (usec): min=1102, max=173550, avg=71419.68, stdev=24315.85 00:36:15.062 clat percentiles (msec): 00:36:15.062 | 1.00th=[ 3], 5.00th=[ 37], 10.00th=[ 44], 20.00th=[ 50], 00:36:15.062 | 30.00th=[ 57], 40.00th=[ 66], 50.00th=[ 71], 60.00th=[ 79], 00:36:15.062 | 70.00th=[ 83], 80.00th=[ 95], 90.00th=[ 105], 95.00th=[ 111], 00:36:15.062 | 99.00th=[ 134], 99.50th=[ 140], 99.90th=[ 174], 99.95th=[ 174], 00:36:15.062 | 99.99th=[ 174] 00:36:15.062 bw ( KiB/s): min= 640, max= 1280, per=3.62%, avg=848.53, stdev=204.39, samples=19 00:36:15.062 iops : min= 160, max= 320, avg=212.11, stdev=51.09, samples=19 00:36:15.062 lat (msec) : 2=0.72%, 4=0.72%, 50=18.87%, 100=65.52%, 250=14.18% 00:36:15.062 cpu : usr=43.02%, sys=0.28%, ctx=1208, majf=0, minf=9 00:36:15.062 IO depths : 1=3.7%, 2=8.0%, 4=19.5%, 8=59.8%, 16=8.9%, 32=0.0%, >=64=0.0% 00:36:15.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.062 complete : 0=0.0%, 4=92.4%, 8=2.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.062 issued rwts: total=2236,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:15.062 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:15.062 filename0: (groupid=0, jobs=1): err= 0: pid=108117: Mon Nov 25 12:35:10 2024 00:36:15.062 read: IOPS=214, BW=858KiB/s (878kB/s)(8588KiB/10012msec) 00:36:15.062 slat (usec): min=3, max=4014, avg= 9.79, stdev=86.59 00:36:15.062 clat (msec): min=13, max=151, avg=74.52, stdev=25.09 00:36:15.062 lat (msec): min=13, max=151, avg=74.53, stdev=25.09 00:36:15.062 clat percentiles (msec): 00:36:15.062 | 1.00th=[ 28], 5.00th=[ 35], 10.00th=[ 44], 20.00th=[ 51], 00:36:15.062 | 30.00th=[ 59], 40.00th=[ 66], 50.00th=[ 74], 60.00th=[ 82], 00:36:15.062 | 70.00th=[ 88], 80.00th=[ 97], 90.00th=[ 107], 95.00th=[ 113], 00:36:15.062 | 99.00th=[ 133], 99.50th=[ 144], 99.90th=[ 153], 99.95th=[ 153], 00:36:15.062 | 99.99th=[ 153] 00:36:15.062 bw ( KiB/s): min= 472, max= 1152, per=3.48%, avg=815.95, stdev=202.58, samples=19 00:36:15.062 iops : min= 118, max= 288, avg=203.95, stdev=50.60, samples=19 00:36:15.062 lat (msec) : 20=0.47%, 50=18.77%, 100=65.25%, 250=15.51% 00:36:15.062 cpu : usr=42.31%, sys=0.41%, ctx=1265, majf=0, minf=9 00:36:15.062 IO depths : 1=2.7%, 2=5.9%, 4=15.9%, 8=65.4%, 16=10.1%, 32=0.0%, >=64=0.0% 00:36:15.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.062 complete : 0=0.0%, 4=91.5%, 8=3.0%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.062 issued rwts: total=2147,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:15.062 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:15.062 filename0: (groupid=0, jobs=1): err= 0: pid=108118: Mon Nov 25 12:35:10 2024 00:36:15.062 read: IOPS=254, BW=1020KiB/s (1044kB/s)(9.97MiB/10014msec) 00:36:15.062 slat (usec): min=2, max=5022, avg= 9.98, stdev=104.90 00:36:15.062 clat (msec): min=21, max=130, avg=62.66, stdev=19.43 00:36:15.062 lat (msec): min=21, max=130, avg=62.67, stdev=19.43 00:36:15.062 clat percentiles (msec): 00:36:15.062 | 1.00th=[ 24], 5.00th=[ 34], 10.00th=[ 40], 20.00th=[ 48], 
00:36:15.062 | 30.00th=[ 51], 40.00th=[ 57], 50.00th=[ 61], 60.00th=[ 64], 00:36:15.062 | 70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 87], 95.00th=[ 96], 00:36:15.062 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 131], 99.95th=[ 131], 00:36:15.062 | 99.99th=[ 131] 00:36:15.062 bw ( KiB/s): min= 640, max= 1328, per=4.30%, avg=1007.58, stdev=187.97, samples=19 00:36:15.062 iops : min= 160, max= 332, avg=251.89, stdev=46.99, samples=19 00:36:15.062 lat (msec) : 50=28.59%, 100=67.29%, 250=4.11% 00:36:15.062 cpu : usr=36.33%, sys=0.30%, ctx=1284, majf=0, minf=9 00:36:15.062 IO depths : 1=0.6%, 2=1.5%, 4=7.8%, 8=76.9%, 16=13.2%, 32=0.0%, >=64=0.0% 00:36:15.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.062 complete : 0=0.0%, 4=89.7%, 8=6.1%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.062 issued rwts: total=2553,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:15.062 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:15.062 filename0: (groupid=0, jobs=1): err= 0: pid=108119: Mon Nov 25 12:35:10 2024 00:36:15.062 read: IOPS=286, BW=1145KiB/s (1173kB/s)(11.2MiB/10033msec) 00:36:15.062 slat (usec): min=3, max=4017, avg=10.22, stdev=98.67 00:36:15.062 clat (msec): min=2, max=129, avg=55.78, stdev=22.34 00:36:15.062 lat (msec): min=2, max=129, avg=55.79, stdev=22.33 00:36:15.062 clat percentiles (msec): 00:36:15.062 | 1.00th=[ 7], 5.00th=[ 19], 10.00th=[ 29], 20.00th=[ 37], 00:36:15.062 | 30.00th=[ 45], 40.00th=[ 50], 50.00th=[ 55], 60.00th=[ 61], 00:36:15.062 | 70.00th=[ 67], 80.00th=[ 73], 90.00th=[ 84], 95.00th=[ 92], 00:36:15.062 | 99.00th=[ 116], 99.50th=[ 127], 99.90th=[ 130], 99.95th=[ 130], 00:36:15.062 | 99.99th=[ 130] 00:36:15.062 bw ( KiB/s): min= 768, max= 2480, per=4.88%, avg=1142.40, stdev=402.41, samples=20 00:36:15.062 iops : min= 192, max= 620, avg=285.60, stdev=100.60, samples=20 00:36:15.062 lat (msec) : 4=0.31%, 10=1.36%, 20=3.93%, 50=35.13%, 100=55.54% 00:36:15.062 lat (msec) : 250=3.73% 00:36:15.062 cpu : usr=44.19%, sys=0.43%, ctx=1407, majf=0, minf=0 00:36:15.062 IO depths : 1=0.8%, 2=1.9%, 4=8.5%, 8=75.7%, 16=13.1%, 32=0.0%, >=64=0.0% 00:36:15.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.062 complete : 0=0.0%, 4=89.8%, 8=5.9%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.062 issued rwts: total=2872,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:15.062 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:15.062 filename0: (groupid=0, jobs=1): err= 0: pid=108120: Mon Nov 25 12:35:10 2024 00:36:15.062 read: IOPS=269, BW=1077KiB/s (1102kB/s)(10.5MiB/10025msec) 00:36:15.062 slat (usec): min=3, max=4013, avg=11.47, stdev=98.97 00:36:15.062 clat (msec): min=7, max=144, avg=59.37, stdev=23.10 00:36:15.062 lat (msec): min=7, max=144, avg=59.39, stdev=23.10 00:36:15.062 clat percentiles (msec): 00:36:15.062 | 1.00th=[ 10], 5.00th=[ 22], 10.00th=[ 33], 20.00th=[ 41], 00:36:15.062 | 30.00th=[ 48], 40.00th=[ 53], 50.00th=[ 58], 60.00th=[ 63], 00:36:15.062 | 70.00th=[ 70], 80.00th=[ 77], 90.00th=[ 88], 95.00th=[ 101], 00:36:15.062 | 99.00th=[ 129], 99.50th=[ 142], 99.90th=[ 144], 99.95th=[ 144], 00:36:15.062 | 99.99th=[ 144] 00:36:15.062 bw ( KiB/s): min= 696, max= 2167, per=4.58%, avg=1072.35, stdev=330.73, samples=20 00:36:15.062 iops : min= 174, max= 541, avg=268.05, stdev=82.55, samples=20 00:36:15.062 lat (msec) : 10=1.19%, 20=3.48%, 50=30.02%, 100=59.79%, 250=5.52% 00:36:15.062 cpu : usr=46.29%, sys=0.34%, ctx=1251, majf=0, minf=9 00:36:15.062 IO depths : 1=1.4%, 2=3.2%, 4=11.0%, 8=72.7%, 
16=11.7%, 32=0.0%, >=64=0.0% 00:36:15.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.062 complete : 0=0.0%, 4=90.2%, 8=4.9%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.062 issued rwts: total=2698,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:15.062 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:15.062 filename0: (groupid=0, jobs=1): err= 0: pid=108121: Mon Nov 25 12:35:10 2024 00:36:15.062 read: IOPS=229, BW=919KiB/s (942kB/s)(9204KiB/10010msec) 00:36:15.062 slat (usec): min=2, max=8029, avg=18.94, stdev=269.34 00:36:15.062 clat (msec): min=9, max=179, avg=69.45, stdev=26.32 00:36:15.062 lat (msec): min=9, max=179, avg=69.47, stdev=26.33 00:36:15.062 clat percentiles (msec): 00:36:15.062 | 1.00th=[ 23], 5.00th=[ 32], 10.00th=[ 42], 20.00th=[ 48], 00:36:15.062 | 30.00th=[ 55], 40.00th=[ 60], 50.00th=[ 68], 60.00th=[ 72], 00:36:15.062 | 70.00th=[ 81], 80.00th=[ 88], 90.00th=[ 108], 95.00th=[ 115], 00:36:15.062 | 99.00th=[ 144], 99.50th=[ 157], 99.90th=[ 180], 99.95th=[ 180], 00:36:15.062 | 99.99th=[ 180] 00:36:15.062 bw ( KiB/s): min= 512, max= 1529, per=3.82%, avg=893.11, stdev=256.21, samples=19 00:36:15.062 iops : min= 128, max= 382, avg=223.26, stdev=64.02, samples=19 00:36:15.062 lat (msec) : 10=0.17%, 20=0.78%, 50=22.16%, 100=63.32%, 250=13.56% 00:36:15.062 cpu : usr=37.39%, sys=0.41%, ctx=1325, majf=0, minf=9 00:36:15.062 IO depths : 1=2.1%, 2=5.0%, 4=15.3%, 8=66.9%, 16=10.6%, 32=0.0%, >=64=0.0% 00:36:15.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.062 complete : 0=0.0%, 4=91.2%, 8=3.5%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.062 issued rwts: total=2301,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:15.062 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:15.062 filename1: (groupid=0, jobs=1): err= 0: pid=108122: Mon Nov 25 12:35:10 2024 00:36:15.062 read: IOPS=270, BW=1082KiB/s (1108kB/s)(10.6MiB/10052msec) 00:36:15.062 slat (usec): min=4, max=8016, avg=13.45, stdev=217.17 00:36:15.062 clat (msec): min=2, max=148, avg=58.99, stdev=25.76 00:36:15.062 lat (msec): min=2, max=148, avg=59.00, stdev=25.76 00:36:15.062 clat percentiles (msec): 00:36:15.062 | 1.00th=[ 5], 5.00th=[ 11], 10.00th=[ 29], 20.00th=[ 42], 00:36:15.062 | 30.00th=[ 48], 40.00th=[ 51], 50.00th=[ 59], 60.00th=[ 61], 00:36:15.062 | 70.00th=[ 71], 80.00th=[ 75], 90.00th=[ 93], 95.00th=[ 103], 00:36:15.062 | 99.00th=[ 136], 99.50th=[ 144], 99.90th=[ 150], 99.95th=[ 150], 00:36:15.062 | 99.99th=[ 150] 00:36:15.062 bw ( KiB/s): min= 600, max= 2730, per=4.61%, avg=1081.00, stdev=444.27, samples=20 00:36:15.062 iops : min= 150, max= 682, avg=270.20, stdev=111.00, samples=20 00:36:15.062 lat (msec) : 4=0.85%, 10=3.27%, 20=4.49%, 50=30.40%, 100=55.55% 00:36:15.062 lat (msec) : 250=5.44% 00:36:15.062 cpu : usr=35.16%, sys=0.28%, ctx=1338, majf=0, minf=0 00:36:15.062 IO depths : 1=1.2%, 2=2.6%, 4=10.2%, 8=73.9%, 16=12.1%, 32=0.0%, >=64=0.0% 00:36:15.062 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.062 complete : 0=0.0%, 4=90.0%, 8=5.3%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.062 issued rwts: total=2720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:15.062 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:15.062 filename1: (groupid=0, jobs=1): err= 0: pid=108123: Mon Nov 25 12:35:10 2024 00:36:15.062 read: IOPS=244, BW=980KiB/s (1003kB/s)(9804KiB/10007msec) 00:36:15.062 slat (usec): min=3, max=4013, avg=11.44, stdev=114.51 00:36:15.062 clat (msec): min=7, max=145, 
avg=65.26, stdev=23.39 00:36:15.062 lat (msec): min=7, max=145, avg=65.27, stdev=23.39 00:36:15.062 clat percentiles (msec): 00:36:15.062 | 1.00th=[ 26], 5.00th=[ 35], 10.00th=[ 40], 20.00th=[ 48], 00:36:15.062 | 30.00th=[ 50], 40.00th=[ 56], 50.00th=[ 61], 60.00th=[ 69], 00:36:15.062 | 70.00th=[ 73], 80.00th=[ 84], 90.00th=[ 99], 95.00th=[ 108], 00:36:15.062 | 99.00th=[ 136], 99.50th=[ 144], 99.90th=[ 146], 99.95th=[ 146], 00:36:15.062 | 99.99th=[ 146] 00:36:15.062 bw ( KiB/s): min= 640, max= 1680, per=4.12%, avg=964.63, stdev=262.86, samples=19 00:36:15.062 iops : min= 160, max= 420, avg=241.16, stdev=65.71, samples=19 00:36:15.062 lat (msec) : 10=0.65%, 20=0.08%, 50=30.35%, 100=59.65%, 250=9.26% 00:36:15.063 cpu : usr=42.99%, sys=0.35%, ctx=1155, majf=0, minf=9 00:36:15.063 IO depths : 1=1.7%, 2=3.9%, 4=12.2%, 8=70.4%, 16=11.7%, 32=0.0%, >=64=0.0% 00:36:15.063 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.063 complete : 0=0.0%, 4=90.7%, 8=4.7%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.063 issued rwts: total=2451,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:15.063 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:15.063 filename1: (groupid=0, jobs=1): err= 0: pid=108124: Mon Nov 25 12:35:10 2024 00:36:15.063 read: IOPS=238, BW=953KiB/s (976kB/s)(9552KiB/10018msec) 00:36:15.063 slat (usec): min=2, max=9013, avg=21.33, stdev=338.19 00:36:15.063 clat (msec): min=11, max=178, avg=66.96, stdev=26.91 00:36:15.063 lat (msec): min=11, max=178, avg=66.98, stdev=26.92 00:36:15.063 clat percentiles (msec): 00:36:15.063 | 1.00th=[ 16], 5.00th=[ 26], 10.00th=[ 33], 20.00th=[ 46], 00:36:15.063 | 30.00th=[ 51], 40.00th=[ 60], 50.00th=[ 65], 60.00th=[ 72], 00:36:15.063 | 70.00th=[ 84], 80.00th=[ 87], 90.00th=[ 105], 95.00th=[ 108], 00:36:15.063 | 99.00th=[ 140], 99.50th=[ 144], 99.90th=[ 180], 99.95th=[ 180], 00:36:15.063 | 99.99th=[ 180] 00:36:15.063 bw ( KiB/s): min= 592, max= 2016, per=4.05%, avg=948.80, stdev=327.03, samples=20 00:36:15.063 iops : min= 148, max= 504, avg=237.20, stdev=81.76, samples=20 00:36:15.063 lat (msec) : 20=2.72%, 50=26.47%, 100=60.68%, 250=10.13% 00:36:15.063 cpu : usr=33.61%, sys=0.23%, ctx=874, majf=0, minf=9 00:36:15.063 IO depths : 1=1.2%, 2=2.7%, 4=10.4%, 8=73.3%, 16=12.5%, 32=0.0%, >=64=0.0% 00:36:15.063 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.063 complete : 0=0.0%, 4=90.3%, 8=5.2%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.063 issued rwts: total=2388,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:15.063 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:15.063 filename1: (groupid=0, jobs=1): err= 0: pid=108125: Mon Nov 25 12:35:10 2024 00:36:15.063 read: IOPS=270, BW=1080KiB/s (1106kB/s)(10.6MiB/10034msec) 00:36:15.063 slat (usec): min=4, max=8017, avg=15.60, stdev=201.04 00:36:15.063 clat (msec): min=22, max=124, avg=59.13, stdev=20.99 00:36:15.063 lat (msec): min=22, max=124, avg=59.14, stdev=20.99 00:36:15.063 clat percentiles (msec): 00:36:15.063 | 1.00th=[ 24], 5.00th=[ 27], 10.00th=[ 34], 20.00th=[ 39], 00:36:15.063 | 30.00th=[ 48], 40.00th=[ 52], 50.00th=[ 58], 60.00th=[ 63], 00:36:15.063 | 70.00th=[ 70], 80.00th=[ 77], 90.00th=[ 85], 95.00th=[ 99], 00:36:15.063 | 99.00th=[ 114], 99.50th=[ 115], 99.90th=[ 125], 99.95th=[ 125], 00:36:15.063 | 99.99th=[ 125] 00:36:15.063 bw ( KiB/s): min= 736, max= 1728, per=4.61%, avg=1079.20, stdev=285.99, samples=20 00:36:15.063 iops : min= 184, max= 432, avg=269.80, stdev=71.50, samples=20 00:36:15.063 lat (msec) : 
50=39.34%, 100=56.75%, 250=3.91% 00:36:15.063 cpu : usr=43.17%, sys=0.39%, ctx=1292, majf=0, minf=9 00:36:15.063 IO depths : 1=0.7%, 2=1.5%, 4=8.3%, 8=76.5%, 16=13.0%, 32=0.0%, >=64=0.0% 00:36:15.063 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.063 complete : 0=0.0%, 4=89.4%, 8=6.2%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.063 issued rwts: total=2710,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:15.063 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:15.063 filename1: (groupid=0, jobs=1): err= 0: pid=108126: Mon Nov 25 12:35:10 2024 00:36:15.063 read: IOPS=259, BW=1038KiB/s (1063kB/s)(10.2MiB/10030msec) 00:36:15.063 slat (usec): min=3, max=4019, avg=12.37, stdev=127.69 00:36:15.063 clat (msec): min=8, max=135, avg=61.55, stdev=22.80 00:36:15.063 lat (msec): min=8, max=135, avg=61.57, stdev=22.80 00:36:15.063 clat percentiles (msec): 00:36:15.063 | 1.00th=[ 13], 5.00th=[ 26], 10.00th=[ 35], 20.00th=[ 45], 00:36:15.063 | 30.00th=[ 48], 40.00th=[ 53], 50.00th=[ 60], 60.00th=[ 65], 00:36:15.063 | 70.00th=[ 72], 80.00th=[ 82], 90.00th=[ 94], 95.00th=[ 106], 00:36:15.063 | 99.00th=[ 124], 99.50th=[ 131], 99.90th=[ 136], 99.95th=[ 136], 00:36:15.063 | 99.99th=[ 136] 00:36:15.063 bw ( KiB/s): min= 640, max= 1968, per=4.41%, avg=1033.60, stdev=305.48, samples=20 00:36:15.063 iops : min= 160, max= 492, avg=258.35, stdev=76.32, samples=20 00:36:15.063 lat (msec) : 10=0.19%, 20=1.88%, 50=33.40%, 100=56.53%, 250=7.99% 00:36:15.063 cpu : usr=43.04%, sys=0.30%, ctx=1344, majf=0, minf=9 00:36:15.063 IO depths : 1=1.8%, 2=4.2%, 4=13.5%, 8=69.1%, 16=11.4%, 32=0.0%, >=64=0.0% 00:36:15.063 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.063 complete : 0=0.0%, 4=90.8%, 8=4.3%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.063 issued rwts: total=2602,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:15.063 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:15.063 filename1: (groupid=0, jobs=1): err= 0: pid=108127: Mon Nov 25 12:35:10 2024 00:36:15.063 read: IOPS=223, BW=895KiB/s (916kB/s)(8960KiB/10016msec) 00:36:15.063 slat (usec): min=3, max=4016, avg= 9.83, stdev=84.84 00:36:15.063 clat (msec): min=13, max=167, avg=71.47, stdev=25.61 00:36:15.063 lat (msec): min=13, max=168, avg=71.48, stdev=25.61 00:36:15.063 clat percentiles (msec): 00:36:15.063 | 1.00th=[ 18], 5.00th=[ 31], 10.00th=[ 40], 20.00th=[ 48], 00:36:15.063 | 30.00th=[ 58], 40.00th=[ 64], 50.00th=[ 72], 60.00th=[ 77], 00:36:15.063 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 107], 95.00th=[ 109], 00:36:15.063 | 99.00th=[ 131], 99.50th=[ 142], 99.90th=[ 169], 99.95th=[ 169], 00:36:15.063 | 99.99th=[ 169] 00:36:15.063 bw ( KiB/s): min= 560, max= 1536, per=3.77%, avg=882.53, stdev=268.81, samples=19 00:36:15.063 iops : min= 140, max= 384, avg=220.63, stdev=67.20, samples=19 00:36:15.063 lat (msec) : 20=2.23%, 50=22.19%, 100=60.62%, 250=14.96% 00:36:15.063 cpu : usr=36.34%, sys=0.29%, ctx=1009, majf=0, minf=9 00:36:15.063 IO depths : 1=2.6%, 2=6.2%, 4=16.7%, 8=64.4%, 16=10.1%, 32=0.0%, >=64=0.0% 00:36:15.063 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.063 complete : 0=0.0%, 4=91.7%, 8=2.8%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.063 issued rwts: total=2240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:15.063 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:15.063 filename1: (groupid=0, jobs=1): err= 0: pid=108128: Mon Nov 25 12:35:10 2024 00:36:15.063 read: IOPS=269, BW=1077KiB/s 
(1103kB/s)(10.6MiB/10037msec) 00:36:15.063 slat (usec): min=5, max=4030, avg=12.05, stdev=125.82 00:36:15.063 clat (msec): min=7, max=180, avg=59.30, stdev=22.39 00:36:15.063 lat (msec): min=7, max=180, avg=59.32, stdev=22.38 00:36:15.063 clat percentiles (msec): 00:36:15.063 | 1.00th=[ 10], 5.00th=[ 25], 10.00th=[ 35], 20.00th=[ 46], 00:36:15.063 | 30.00th=[ 48], 40.00th=[ 53], 50.00th=[ 58], 60.00th=[ 61], 00:36:15.063 | 70.00th=[ 69], 80.00th=[ 74], 90.00th=[ 88], 95.00th=[ 100], 00:36:15.063 | 99.00th=[ 118], 99.50th=[ 144], 99.90th=[ 157], 99.95th=[ 182], 00:36:15.063 | 99.99th=[ 182] 00:36:15.063 bw ( KiB/s): min= 720, max= 2167, per=4.58%, avg=1073.95, stdev=323.42, samples=20 00:36:15.063 iops : min= 180, max= 541, avg=268.45, stdev=80.72, samples=20 00:36:15.063 lat (msec) : 10=1.18%, 20=3.29%, 50=33.09%, 100=57.99%, 250=4.44% 00:36:15.063 cpu : usr=42.90%, sys=0.37%, ctx=1312, majf=0, minf=9 00:36:15.063 IO depths : 1=1.1%, 2=2.7%, 4=10.7%, 8=73.2%, 16=12.3%, 32=0.0%, >=64=0.0% 00:36:15.063 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.063 complete : 0=0.0%, 4=90.1%, 8=5.2%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.063 issued rwts: total=2702,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:15.063 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:15.063 filename1: (groupid=0, jobs=1): err= 0: pid=108129: Mon Nov 25 12:35:10 2024 00:36:15.063 read: IOPS=214, BW=858KiB/s (878kB/s)(8588KiB/10014msec) 00:36:15.063 slat (usec): min=3, max=8021, avg=11.78, stdev=173.02 00:36:15.063 clat (msec): min=19, max=158, avg=74.52, stdev=23.30 00:36:15.063 lat (msec): min=19, max=158, avg=74.54, stdev=23.30 00:36:15.063 clat percentiles (msec): 00:36:15.063 | 1.00th=[ 29], 5.00th=[ 39], 10.00th=[ 48], 20.00th=[ 53], 00:36:15.063 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 82], 00:36:15.063 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 117], 00:36:15.063 | 99.00th=[ 132], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:36:15.063 | 99.99th=[ 159] 00:36:15.063 bw ( KiB/s): min= 616, max= 1152, per=3.64%, avg=852.40, stdev=187.52, samples=20 00:36:15.063 iops : min= 154, max= 288, avg=213.10, stdev=46.88, samples=20 00:36:15.063 lat (msec) : 20=0.47%, 50=18.63%, 100=68.00%, 250=12.90% 00:36:15.063 cpu : usr=32.39%, sys=0.28%, ctx=837, majf=0, minf=9 00:36:15.063 IO depths : 1=2.4%, 2=5.0%, 4=14.3%, 8=67.8%, 16=10.5%, 32=0.0%, >=64=0.0% 00:36:15.063 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.063 complete : 0=0.0%, 4=91.0%, 8=3.7%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.063 issued rwts: total=2147,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:15.063 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:15.063 filename2: (groupid=0, jobs=1): err= 0: pid=108130: Mon Nov 25 12:35:10 2024 00:36:15.063 read: IOPS=216, BW=865KiB/s (886kB/s)(8664KiB/10011msec) 00:36:15.063 slat (usec): min=2, max=7688, avg=11.90, stdev=165.10 00:36:15.063 clat (msec): min=15, max=155, avg=73.86, stdev=24.89 00:36:15.063 lat (msec): min=15, max=155, avg=73.87, stdev=24.89 00:36:15.063 clat percentiles (msec): 00:36:15.063 | 1.00th=[ 24], 5.00th=[ 35], 10.00th=[ 44], 20.00th=[ 53], 00:36:15.063 | 30.00th=[ 61], 40.00th=[ 65], 50.00th=[ 72], 60.00th=[ 83], 00:36:15.063 | 70.00th=[ 88], 80.00th=[ 96], 90.00th=[ 107], 95.00th=[ 114], 00:36:15.063 | 99.00th=[ 142], 99.50th=[ 144], 99.90th=[ 157], 99.95th=[ 157], 00:36:15.063 | 99.99th=[ 157] 00:36:15.063 bw ( KiB/s): min= 504, max= 1504, per=3.61%, 
avg=845.89, stdev=241.45, samples=19 00:36:15.063 iops : min= 126, max= 376, avg=211.47, stdev=60.36, samples=19 00:36:15.063 lat (msec) : 20=0.28%, 50=18.65%, 100=65.24%, 250=15.84% 00:36:15.063 cpu : usr=35.62%, sys=0.22%, ctx=1265, majf=0, minf=9 00:36:15.063 IO depths : 1=0.2%, 2=0.6%, 4=4.2%, 8=78.7%, 16=16.3%, 32=0.0%, >=64=0.0% 00:36:15.063 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.063 complete : 0=0.0%, 4=89.5%, 8=8.6%, 16=1.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.063 issued rwts: total=2166,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:15.063 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:15.063 filename2: (groupid=0, jobs=1): err= 0: pid=108131: Mon Nov 25 12:35:10 2024 00:36:15.063 read: IOPS=240, BW=960KiB/s (983kB/s)(9612KiB/10009msec) 00:36:15.063 slat (usec): min=3, max=8013, avg=11.60, stdev=163.39 00:36:15.063 clat (msec): min=11, max=159, avg=66.58, stdev=23.41 00:36:15.063 lat (msec): min=11, max=159, avg=66.59, stdev=23.41 00:36:15.063 clat percentiles (msec): 00:36:15.063 | 1.00th=[ 24], 5.00th=[ 32], 10.00th=[ 37], 20.00th=[ 48], 00:36:15.063 | 30.00th=[ 52], 40.00th=[ 61], 50.00th=[ 66], 60.00th=[ 72], 00:36:15.064 | 70.00th=[ 75], 80.00th=[ 85], 90.00th=[ 99], 95.00th=[ 108], 00:36:15.064 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 161], 99.95th=[ 161], 00:36:15.064 | 99.99th=[ 161] 00:36:15.064 bw ( KiB/s): min= 640, max= 1640, per=4.05%, avg=948.63, stdev=258.13, samples=19 00:36:15.064 iops : min= 160, max= 410, avg=237.16, stdev=64.53, samples=19 00:36:15.064 lat (msec) : 20=0.54%, 50=28.05%, 100=62.17%, 250=9.24% 00:36:15.064 cpu : usr=39.26%, sys=0.36%, ctx=1111, majf=0, minf=9 00:36:15.064 IO depths : 1=1.6%, 2=3.6%, 4=12.6%, 8=70.7%, 16=11.5%, 32=0.0%, >=64=0.0% 00:36:15.064 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.064 complete : 0=0.0%, 4=90.6%, 8=4.4%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.064 issued rwts: total=2403,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:15.064 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:15.064 filename2: (groupid=0, jobs=1): err= 0: pid=108132: Mon Nov 25 12:35:10 2024 00:36:15.064 read: IOPS=276, BW=1104KiB/s (1131kB/s)(10.8MiB/10026msec) 00:36:15.064 slat (usec): min=3, max=8015, avg=13.75, stdev=186.47 00:36:15.064 clat (msec): min=2, max=143, avg=57.81, stdev=27.82 00:36:15.064 lat (msec): min=2, max=143, avg=57.82, stdev=27.82 00:36:15.064 clat percentiles (msec): 00:36:15.064 | 1.00th=[ 4], 5.00th=[ 11], 10.00th=[ 23], 20.00th=[ 36], 00:36:15.064 | 30.00th=[ 43], 40.00th=[ 48], 50.00th=[ 57], 60.00th=[ 62], 00:36:15.064 | 70.00th=[ 72], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 108], 00:36:15.064 | 99.00th=[ 118], 99.50th=[ 122], 99.90th=[ 131], 99.95th=[ 131], 00:36:15.064 | 99.99th=[ 144] 00:36:15.064 bw ( KiB/s): min= 656, max= 3328, per=4.70%, avg=1100.80, stdev=591.32, samples=20 00:36:15.064 iops : min= 164, max= 832, avg=275.20, stdev=147.83, samples=20 00:36:15.064 lat (msec) : 4=1.59%, 10=3.22%, 20=4.80%, 50=34.10%, 100=49.28% 00:36:15.064 lat (msec) : 250=7.01% 00:36:15.064 cpu : usr=38.30%, sys=0.34%, ctx=1020, majf=0, minf=0 00:36:15.064 IO depths : 1=1.7%, 2=3.6%, 4=11.4%, 8=71.6%, 16=11.7%, 32=0.0%, >=64=0.0% 00:36:15.064 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.064 complete : 0=0.0%, 4=90.4%, 8=4.8%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.064 issued rwts: total=2768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:15.064 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:36:15.064 filename2: (groupid=0, jobs=1): err= 0: pid=108133: Mon Nov 25 12:35:10 2024 00:36:15.064 read: IOPS=224, BW=899KiB/s (921kB/s)(9012KiB/10022msec) 00:36:15.064 slat (usec): min=2, max=8014, avg=11.83, stdev=168.76 00:36:15.064 clat (msec): min=23, max=182, avg=71.09, stdev=25.04 00:36:15.064 lat (msec): min=23, max=182, avg=71.11, stdev=25.05 00:36:15.064 clat percentiles (msec): 00:36:15.064 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 46], 20.00th=[ 50], 00:36:15.064 | 30.00th=[ 57], 40.00th=[ 63], 50.00th=[ 70], 60.00th=[ 73], 00:36:15.064 | 70.00th=[ 81], 80.00th=[ 89], 90.00th=[ 103], 95.00th=[ 122], 00:36:15.064 | 99.00th=[ 144], 99.50th=[ 146], 99.90th=[ 184], 99.95th=[ 184], 00:36:15.064 | 99.99th=[ 184] 00:36:15.064 bw ( KiB/s): min= 600, max= 1461, per=3.82%, avg=894.65, stdev=237.29, samples=20 00:36:15.064 iops : min= 150, max= 365, avg=223.65, stdev=59.29, samples=20 00:36:15.064 lat (msec) : 50=20.68%, 100=69.06%, 250=10.25% 00:36:15.064 cpu : usr=36.76%, sys=0.43%, ctx=1146, majf=0, minf=9 00:36:15.064 IO depths : 1=1.8%, 2=4.3%, 4=13.3%, 8=69.1%, 16=11.5%, 32=0.0%, >=64=0.0% 00:36:15.064 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.064 complete : 0=0.0%, 4=91.0%, 8=4.1%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.064 issued rwts: total=2253,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:15.064 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:15.064 filename2: (groupid=0, jobs=1): err= 0: pid=108134: Mon Nov 25 12:35:10 2024 00:36:15.064 read: IOPS=246, BW=988KiB/s (1011kB/s)(9900KiB/10024msec) 00:36:15.064 slat (usec): min=3, max=8014, avg=12.86, stdev=180.54 00:36:15.064 clat (msec): min=14, max=131, avg=64.66, stdev=20.10 00:36:15.064 lat (msec): min=14, max=131, avg=64.67, stdev=20.09 00:36:15.064 clat percentiles (msec): 00:36:15.064 | 1.00th=[ 16], 5.00th=[ 35], 10.00th=[ 41], 20.00th=[ 48], 00:36:15.064 | 30.00th=[ 54], 40.00th=[ 58], 50.00th=[ 64], 60.00th=[ 70], 00:36:15.064 | 70.00th=[ 72], 80.00th=[ 82], 90.00th=[ 96], 95.00th=[ 100], 00:36:15.064 | 99.00th=[ 120], 99.50th=[ 125], 99.90th=[ 132], 99.95th=[ 132], 00:36:15.064 | 99.99th=[ 132] 00:36:15.064 bw ( KiB/s): min= 672, max= 1360, per=4.21%, avg=985.85, stdev=179.23, samples=20 00:36:15.064 iops : min= 168, max= 340, avg=246.45, stdev=44.79, samples=20 00:36:15.064 lat (msec) : 20=1.29%, 50=24.04%, 100=69.86%, 250=4.81% 00:36:15.064 cpu : usr=38.80%, sys=0.30%, ctx=1091, majf=0, minf=9 00:36:15.064 IO depths : 1=1.1%, 2=2.6%, 4=10.5%, 8=73.3%, 16=12.5%, 32=0.0%, >=64=0.0% 00:36:15.064 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.064 complete : 0=0.0%, 4=90.1%, 8=5.5%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.064 issued rwts: total=2475,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:15.064 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:15.064 filename2: (groupid=0, jobs=1): err= 0: pid=108135: Mon Nov 25 12:35:10 2024 00:36:15.064 read: IOPS=268, BW=1075KiB/s (1100kB/s)(10.6MiB/10057msec) 00:36:15.064 slat (usec): min=2, max=4015, avg= 8.98, stdev=77.24 00:36:15.064 clat (msec): min=8, max=167, avg=59.41, stdev=25.76 00:36:15.064 lat (msec): min=8, max=167, avg=59.42, stdev=25.76 00:36:15.064 clat percentiles (msec): 00:36:15.064 | 1.00th=[ 10], 5.00th=[ 16], 10.00th=[ 32], 20.00th=[ 39], 00:36:15.064 | 30.00th=[ 48], 40.00th=[ 50], 50.00th=[ 58], 60.00th=[ 61], 00:36:15.064 | 70.00th=[ 72], 80.00th=[ 83], 90.00th=[ 96], 95.00th=[ 108], 00:36:15.064 | 99.00th=[ 144], 
99.50th=[ 144], 99.90th=[ 169], 99.95th=[ 169], 00:36:15.064 | 99.99th=[ 169] 00:36:15.064 bw ( KiB/s): min= 640, max= 2608, per=4.59%, avg=1074.40, stdev=417.59, samples=20 00:36:15.064 iops : min= 160, max= 652, avg=268.60, stdev=104.40, samples=20 00:36:15.064 lat (msec) : 10=2.70%, 20=3.15%, 50=35.83%, 100=52.15%, 250=6.18% 00:36:15.064 cpu : usr=33.81%, sys=0.22%, ctx=876, majf=0, minf=9 00:36:15.064 IO depths : 1=0.7%, 2=1.4%, 4=8.0%, 8=76.6%, 16=13.2%, 32=0.0%, >=64=0.0% 00:36:15.064 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.064 complete : 0=0.0%, 4=89.6%, 8=6.2%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.064 issued rwts: total=2702,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:15.064 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:15.064 filename2: (groupid=0, jobs=1): err= 0: pid=108136: Mon Nov 25 12:35:10 2024 00:36:15.064 read: IOPS=218, BW=874KiB/s (895kB/s)(8752KiB/10012msec) 00:36:15.064 slat (nsec): min=3692, max=49366, avg=8812.78, stdev=5048.72 00:36:15.064 clat (msec): min=21, max=166, avg=73.16, stdev=26.95 00:36:15.064 lat (msec): min=21, max=166, avg=73.17, stdev=26.95 00:36:15.064 clat percentiles (msec): 00:36:15.064 | 1.00th=[ 24], 5.00th=[ 32], 10.00th=[ 36], 20.00th=[ 48], 00:36:15.064 | 30.00th=[ 56], 40.00th=[ 64], 50.00th=[ 72], 60.00th=[ 84], 00:36:15.064 | 70.00th=[ 88], 80.00th=[ 96], 90.00th=[ 109], 95.00th=[ 115], 00:36:15.064 | 99.00th=[ 131], 99.50th=[ 132], 99.90th=[ 167], 99.95th=[ 167], 00:36:15.064 | 99.99th=[ 167] 00:36:15.064 bw ( KiB/s): min= 560, max= 1560, per=3.71%, avg=868.80, stdev=268.99, samples=20 00:36:15.064 iops : min= 140, max= 390, avg=217.20, stdev=67.25, samples=20 00:36:15.064 lat (msec) : 50=25.50%, 100=56.63%, 250=17.87% 00:36:15.064 cpu : usr=34.91%, sys=0.25%, ctx=1226, majf=0, minf=9 00:36:15.064 IO depths : 1=0.5%, 2=1.7%, 4=9.0%, 8=75.2%, 16=13.5%, 32=0.0%, >=64=0.0% 00:36:15.064 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.064 complete : 0=0.0%, 4=90.0%, 8=5.8%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.064 issued rwts: total=2188,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:15.064 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:15.064 filename2: (groupid=0, jobs=1): err= 0: pid=108137: Mon Nov 25 12:35:10 2024 00:36:15.064 read: IOPS=218, BW=874KiB/s (895kB/s)(8756KiB/10017msec) 00:36:15.064 slat (usec): min=3, max=8020, avg=13.58, stdev=191.54 00:36:15.064 clat (msec): min=18, max=163, avg=73.06, stdev=23.36 00:36:15.064 lat (msec): min=18, max=163, avg=73.07, stdev=23.36 00:36:15.064 clat percentiles (msec): 00:36:15.064 | 1.00th=[ 31], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 53], 00:36:15.064 | 30.00th=[ 57], 40.00th=[ 64], 50.00th=[ 71], 60.00th=[ 77], 00:36:15.064 | 70.00th=[ 83], 80.00th=[ 94], 90.00th=[ 106], 95.00th=[ 116], 00:36:15.064 | 99.00th=[ 140], 99.50th=[ 144], 99.90th=[ 165], 99.95th=[ 165], 00:36:15.064 | 99.99th=[ 165] 00:36:15.064 bw ( KiB/s): min= 512, max= 1152, per=3.71%, avg=868.80, stdev=197.95, samples=20 00:36:15.064 iops : min= 128, max= 288, avg=217.20, stdev=49.49, samples=20 00:36:15.064 lat (msec) : 20=0.27%, 50=16.90%, 100=69.53%, 250=13.29% 00:36:15.064 cpu : usr=45.07%, sys=0.38%, ctx=1255, majf=0, minf=9 00:36:15.064 IO depths : 1=2.0%, 2=5.3%, 4=16.4%, 8=65.3%, 16=11.0%, 32=0.0%, >=64=0.0% 00:36:15.064 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.064 complete : 0=0.0%, 4=91.7%, 8=3.0%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:15.064 issued 
rwts: total=2189,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:15.064 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:15.064 00:36:15.064 Run status group 0 (all jobs): 00:36:15.064 READ: bw=22.9MiB/s (24.0MB/s), 858KiB/s-1145KiB/s (878kB/s-1173kB/s), io=230MiB (241MB), run=10001-10057msec 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd 
bdev_null_delete bdev_null2 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:15.322 bdev_null0 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.322 12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:15.323 [2024-11-25 12:35:11.128611] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:36:15.323 12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.323 12:35:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:15.323 12:35:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:15.323 12:35:11 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@18 -- # local sub_id=1 00:36:15.323 12:35:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:15.323 12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.323 12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:15.323 bdev_null1 00:36:15.323 12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.323 12:35:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:15.323 12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.323 12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:15.323 12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.323 12:35:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:15.323 12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.323 12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:15.323 12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.323 12:35:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:36:15.323 12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.323 12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:15.323 12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.323 12:35:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:36:15.323 12:35:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:36:15.323 12:35:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:15.323 12:35:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:15.323 12:35:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:36:15.323 12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:15.323 12:35:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:36:15.323 12:35:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:15.323 12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:15.323 12:35:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:15.323 { 00:36:15.323 "params": { 00:36:15.323 "name": "Nvme$subsystem", 00:36:15.323 "trtype": "$TEST_TRANSPORT", 00:36:15.323 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:15.323 "adrfam": "ipv4", 00:36:15.323 "trsvcid": "$NVMF_PORT", 00:36:15.323 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:15.323 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:15.323 "hdgst": ${hdgst:-false}, 00:36:15.323 "ddgst": ${ddgst:-false} 00:36:15.323 }, 00:36:15.323 "method": 
"bdev_nvme_attach_controller" 00:36:15.323 } 00:36:15.323 EOF 00:36:15.323 )") 00:36:15.323 12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:15.323 12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:15.323 12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:15.323 12:35:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:15.323 12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:36:15.323 12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:15.323 12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:15.323 12:35:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:15.323 12:35:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:15.323 12:35:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:15.323 12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:15.323 12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:36:15.323 12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:15.323 12:35:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:15.323 12:35:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:15.323 { 00:36:15.323 "params": { 00:36:15.323 "name": "Nvme$subsystem", 00:36:15.323 "trtype": "$TEST_TRANSPORT", 00:36:15.323 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:15.323 "adrfam": "ipv4", 00:36:15.323 "trsvcid": "$NVMF_PORT", 00:36:15.323 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:15.323 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:15.323 "hdgst": ${hdgst:-false}, 00:36:15.323 "ddgst": ${ddgst:-false} 00:36:15.323 }, 00:36:15.323 "method": "bdev_nvme_attach_controller" 00:36:15.323 } 00:36:15.323 EOF 00:36:15.323 )") 00:36:15.323 12:35:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:15.323 12:35:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:15.323 12:35:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:15.323 12:35:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:15.323 12:35:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:15.323 12:35:11 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:15.323 12:35:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:36:15.323 12:35:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=,
00:36:15.323 12:35:11 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:36:15.323 "params": {
00:36:15.323 "name": "Nvme0",
00:36:15.323 "trtype": "tcp",
00:36:15.323 "traddr": "10.0.0.3",
00:36:15.323 "adrfam": "ipv4",
00:36:15.323 "trsvcid": "4420",
00:36:15.323 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:36:15.323 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:36:15.323 "hdgst": false,
00:36:15.323 "ddgst": false
00:36:15.323 },
00:36:15.323 "method": "bdev_nvme_attach_controller"
00:36:15.323 },{
00:36:15.323 "params": {
00:36:15.323 "name": "Nvme1",
00:36:15.323 "trtype": "tcp",
00:36:15.323 "traddr": "10.0.0.3",
00:36:15.323 "adrfam": "ipv4",
00:36:15.323 "trsvcid": "4420",
00:36:15.323 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:36:15.323 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:36:15.323 "hdgst": false,
00:36:15.323 "ddgst": false
00:36:15.323 },
00:36:15.323 "method": "bdev_nvme_attach_controller"
00:36:15.323 }'
12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
12:35:11 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:36:15.580 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8
00:36:15.580 ...
00:36:15.580 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8
00:36:15.580 ...
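[annotation] The filename0/filename1 descriptions above follow from the parameters set at target/dif.sh@115 (NULL_DIF=1, bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5, files=1): two job sections, each cloned into two threads, hence the four threads fio starts below. A job file of roughly this shape would produce the same descriptions (an approximation of what gen_fio_conf emits on fd 61; the bdev names Nvme0n1/Nvme1n1 are the conventional namespace-1 names for the controllers attached above, not taken from the log):

    [global]
    ioengine=spdk_bdev
    rw=randread
    bs=8k,16k,128k
    iodepth=8
    numjobs=2
    runtime=5

    [filename0]
    filename=Nvme0n1

    [filename1]
    filename=Nvme1n1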
00:36:15.580 fio-3.35 00:36:15.580 Starting 4 threads 00:36:22.131 00:36:22.131 filename0: (groupid=0, jobs=1): err= 0: pid=108274: Mon Nov 25 12:35:16 2024 00:36:22.131 read: IOPS=2338, BW=18.3MiB/s (19.2MB/s)(91.4MiB/5003msec) 00:36:22.131 slat (nsec): min=5387, max=53869, avg=7243.55, stdev=5610.87 00:36:22.131 clat (usec): min=773, max=4903, avg=3388.12, stdev=441.83 00:36:22.131 lat (usec): min=779, max=4925, avg=3395.36, stdev=440.67 00:36:22.131 clat percentiles (usec): 00:36:22.131 | 1.00th=[ 2704], 5.00th=[ 2835], 10.00th=[ 2900], 20.00th=[ 2933], 00:36:22.131 | 30.00th=[ 2966], 40.00th=[ 3032], 50.00th=[ 3687], 60.00th=[ 3687], 00:36:22.131 | 70.00th=[ 3720], 80.00th=[ 3752], 90.00th=[ 3785], 95.00th=[ 3818], 00:36:22.131 | 99.00th=[ 3982], 99.50th=[ 4146], 99.90th=[ 4490], 99.95th=[ 4883], 00:36:22.131 | 99.99th=[ 4883] 00:36:22.131 bw ( KiB/s): min=16896, max=22192, per=24.67%, avg=18423.11, stdev=2080.17, samples=9 00:36:22.131 iops : min= 2112, max= 2774, avg=2302.89, stdev=260.02, samples=9 00:36:22.131 lat (usec) : 1000=0.34% 00:36:22.131 lat (msec) : 2=0.16%, 4=98.56%, 10=0.93% 00:36:22.131 cpu : usr=95.88%, sys=3.16%, ctx=12, majf=0, minf=0 00:36:22.131 IO depths : 1=10.3%, 2=24.4%, 4=50.6%, 8=14.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:22.131 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.131 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.131 issued rwts: total=11699,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:22.131 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:22.131 filename0: (groupid=0, jobs=1): err= 0: pid=108275: Mon Nov 25 12:35:16 2024 00:36:22.131 read: IOPS=2332, BW=18.2MiB/s (19.1MB/s)(91.1MiB/5001msec) 00:36:22.131 slat (nsec): min=3061, max=59846, avg=9176.08, stdev=5787.96 00:36:22.131 clat (usec): min=1049, max=5142, avg=3388.62, stdev=427.44 00:36:22.131 lat (usec): min=1055, max=5150, avg=3397.79, stdev=427.73 00:36:22.131 clat percentiles (usec): 00:36:22.131 | 1.00th=[ 2769], 5.00th=[ 2835], 10.00th=[ 2868], 20.00th=[ 2933], 00:36:22.131 | 30.00th=[ 2933], 40.00th=[ 3032], 50.00th=[ 3654], 60.00th=[ 3687], 00:36:22.131 | 70.00th=[ 3720], 80.00th=[ 3752], 90.00th=[ 3785], 95.00th=[ 3818], 00:36:22.131 | 99.00th=[ 4293], 99.50th=[ 4359], 99.90th=[ 4686], 99.95th=[ 4883], 00:36:22.131 | 99.99th=[ 5145] 00:36:22.131 bw ( KiB/s): min=16768, max=21588, per=24.58%, avg=18351.89, stdev=1929.35, samples=9 00:36:22.131 iops : min= 2096, max= 2698, avg=2293.89, stdev=240.99, samples=9 00:36:22.131 lat (msec) : 2=0.12%, 4=97.90%, 10=1.98% 00:36:22.131 cpu : usr=96.18%, sys=2.90%, ctx=7, majf=0, minf=0 00:36:22.131 IO depths : 1=9.4%, 2=20.1%, 4=54.9%, 8=15.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:22.131 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.131 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.131 issued rwts: total=11667,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:22.131 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:22.131 filename1: (groupid=0, jobs=1): err= 0: pid=108276: Mon Nov 25 12:35:16 2024 00:36:22.131 read: IOPS=2332, BW=18.2MiB/s (19.1MB/s)(91.1MiB/5001msec) 00:36:22.131 slat (nsec): min=2929, max=55425, avg=10907.73, stdev=8533.56 00:36:22.131 clat (usec): min=1593, max=5771, avg=3373.79, stdev=426.16 00:36:22.131 lat (usec): min=1599, max=5777, avg=3384.70, stdev=425.23 00:36:22.131 clat percentiles (usec): 00:36:22.131 | 1.00th=[ 2704], 5.00th=[ 2802], 10.00th=[ 2868], 20.00th=[ 
2900], 00:36:22.131 | 30.00th=[ 2933], 40.00th=[ 3032], 50.00th=[ 3654], 60.00th=[ 3687], 00:36:22.131 | 70.00th=[ 3720], 80.00th=[ 3720], 90.00th=[ 3785], 95.00th=[ 3818], 00:36:22.131 | 99.00th=[ 4113], 99.50th=[ 4555], 99.90th=[ 4752], 99.95th=[ 5604], 00:36:22.131 | 99.99th=[ 5735] 00:36:22.131 bw ( KiB/s): min=16768, max=21632, per=24.58%, avg=18356.00, stdev=1945.83, samples=9 00:36:22.131 iops : min= 2096, max= 2704, avg=2294.44, stdev=243.13, samples=9 00:36:22.131 lat (msec) : 2=0.08%, 4=98.83%, 10=1.10% 00:36:22.131 cpu : usr=95.70%, sys=3.34%, ctx=4, majf=0, minf=0 00:36:22.131 IO depths : 1=11.4%, 2=25.0%, 4=50.0%, 8=13.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:22.131 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.131 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.131 issued rwts: total=11664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:22.131 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:22.131 filename1: (groupid=0, jobs=1): err= 0: pid=108277: Mon Nov 25 12:35:16 2024 00:36:22.131 read: IOPS=2332, BW=18.2MiB/s (19.1MB/s)(91.1MiB/5001msec) 00:36:22.131 slat (nsec): min=3786, max=55622, avg=10551.79, stdev=8050.53 00:36:22.131 clat (usec): min=1547, max=5371, avg=3383.59, stdev=424.21 00:36:22.131 lat (usec): min=1553, max=5377, avg=3394.14, stdev=423.59 00:36:22.131 clat percentiles (usec): 00:36:22.131 | 1.00th=[ 2737], 5.00th=[ 2835], 10.00th=[ 2868], 20.00th=[ 2900], 00:36:22.131 | 30.00th=[ 2966], 40.00th=[ 3032], 50.00th=[ 3654], 60.00th=[ 3687], 00:36:22.131 | 70.00th=[ 3720], 80.00th=[ 3752], 90.00th=[ 3785], 95.00th=[ 3818], 00:36:22.131 | 99.00th=[ 4146], 99.50th=[ 4555], 99.90th=[ 4686], 99.95th=[ 4752], 00:36:22.131 | 99.99th=[ 5014] 00:36:22.132 bw ( KiB/s): min=16768, max=21632, per=24.58%, avg=18351.44, stdev=1946.84, samples=9 00:36:22.132 iops : min= 2096, max= 2704, avg=2293.89, stdev=243.28, samples=9 00:36:22.132 lat (msec) : 2=0.07%, 4=98.74%, 10=1.19% 00:36:22.132 cpu : usr=95.80%, sys=3.28%, ctx=4, majf=0, minf=0 00:36:22.132 IO depths : 1=11.5%, 2=25.0%, 4=50.0%, 8=13.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:22.132 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.132 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:22.132 issued rwts: total=11664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:22.132 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:22.132 00:36:22.132 Run status group 0 (all jobs): 00:36:22.132 READ: bw=72.9MiB/s (76.5MB/s), 18.2MiB/s-18.3MiB/s (19.1MB/s-19.2MB/s), io=365MiB (383MB), run=5001-5003msec 00:36:22.132 12:35:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:36:22.132 12:35:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:22.132 12:35:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:22.132 12:35:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:22.132 12:35:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:22.132 12:35:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:22.132 12:35:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.132 12:35:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:22.132 12:35:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:36:22.132 12:35:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:22.132 12:35:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.132 12:35:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:22.132 12:35:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.132 12:35:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:22.132 12:35:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:22.132 12:35:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:22.132 12:35:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:22.132 12:35:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.132 12:35:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:22.132 12:35:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.132 12:35:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:22.132 12:35:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.132 12:35:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:22.132 ************************************ 00:36:22.132 END TEST fio_dif_rand_params 00:36:22.132 ************************************ 00:36:22.132 12:35:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.132 00:36:22.132 real 0m23.201s 00:36:22.132 user 2m9.779s 00:36:22.132 sys 0m2.634s 00:36:22.132 12:35:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:22.132 12:35:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:22.132 12:35:17 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:36:22.132 12:35:17 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:22.132 12:35:17 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:22.132 12:35:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:22.132 ************************************ 00:36:22.132 START TEST fio_dif_digest 00:36:22.132 ************************************ 00:36:22.132 12:35:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:36:22.132 12:35:17 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:36:22.132 12:35:17 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:36:22.132 12:35:17 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:36:22.132 12:35:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:36:22.132 12:35:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:36:22.132 12:35:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:36:22.132 12:35:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:36:22.132 12:35:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:36:22.132 12:35:17 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:36:22.132 12:35:17 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:36:22.132 12:35:17 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:36:22.132 12:35:17 
nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:36:22.132 12:35:17 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:36:22.132 12:35:17 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:36:22.132 12:35:17 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:36:22.132 12:35:17 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:22.132 12:35:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.132 12:35:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:22.132 bdev_null0 00:36:22.132 12:35:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.132 12:35:17 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:22.132 12:35:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.132 12:35:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:22.132 12:35:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.132 12:35:17 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:22.132 12:35:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.132 12:35:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:22.132 12:35:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.132 12:35:17 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:36:22.132 12:35:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.132 12:35:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:22.132 [2024-11-25 12:35:17.168540] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:36:22.132 12:35:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.132 12:35:17 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:36:22.132 12:35:17 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:22.132 12:35:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:22.132 12:35:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:22.132 12:35:17 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:36:22.132 12:35:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:22.132 12:35:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:22.132 12:35:17 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:22.132 12:35:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:22.132 12:35:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:36:22.132 12:35:17 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:36:22.132 12:35:17 
nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:36:22.132 12:35:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:22.132 12:35:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:22.132 12:35:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:36:22.132 12:35:17 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:36:22.132 12:35:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:22.132 12:35:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:22.132 { 00:36:22.132 "params": { 00:36:22.132 "name": "Nvme$subsystem", 00:36:22.132 "trtype": "$TEST_TRANSPORT", 00:36:22.132 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:22.132 "adrfam": "ipv4", 00:36:22.132 "trsvcid": "$NVMF_PORT", 00:36:22.132 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:22.132 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:22.132 "hdgst": ${hdgst:-false}, 00:36:22.132 "ddgst": ${ddgst:-false} 00:36:22.132 }, 00:36:22.132 "method": "bdev_nvme_attach_controller" 00:36:22.132 } 00:36:22.132 EOF 00:36:22.132 )") 00:36:22.132 12:35:17 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:36:22.132 12:35:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:36:22.132 12:35:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:36:22.132 12:35:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:36:22.132 12:35:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:22.132 12:35:17 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:36:22.132 12:35:17 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:36:22.132 12:35:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
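[annotation] create_subsystems 0 for the digest test (traced above) is the same three-step target setup as in the earlier tests, except the null bdev is now created with protection information enabled. The plain rpc.py equivalents of the traced rpc_cmd calls (a sketch):

    # 64 MB null bdev, 512-byte blocks, 16 bytes of metadata, DIF type 3.
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.3 -s 4420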
00:36:22.132 12:35:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=,
00:36:22.132 12:35:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:36:22.132 "params": {
00:36:22.132 "name": "Nvme0",
00:36:22.132 "trtype": "tcp",
00:36:22.132 "traddr": "10.0.0.3",
00:36:22.132 "adrfam": "ipv4",
00:36:22.132 "trsvcid": "4420",
00:36:22.132 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:36:22.132 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:36:22.132 "hdgst": true,
00:36:22.132 "ddgst": true
00:36:22.132 },
00:36:22.132 "method": "bdev_nvme_attach_controller"
00:36:22.132 }'
12:35:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib=
12:35:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
12:35:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
12:35:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
12:35:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
12:35:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}'
12:35:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib=
12:35:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
12:35:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
12:35:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:36:22.133 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3
00:36:22.133 ...
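[annotation] The attach config above differs from the fio_dif_rand_params one in exactly one respect: "hdgst": true and "ddgst": true, which enable NVMe/TCP header and data digests (per-PDU CRC32C checks) on the connection under test. A one-off equivalent attach via rpc.py would be roughly this (a sketch; flag spellings as in recent SPDK rpc.py, not taken from this log):

    scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 \
        -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 --hdgst --ddgst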
00:36:22.133 fio-3.35 00:36:22.133 Starting 3 threads 00:36:32.095 00:36:32.095 filename0: (groupid=0, jobs=1): err= 0: pid=108383: Mon Nov 25 12:35:27 2024 00:36:32.095 read: IOPS=307, BW=38.5MiB/s (40.4MB/s)(385MiB/10003msec) 00:36:32.095 slat (nsec): min=3841, max=22318, avg=6792.34, stdev=1057.28 00:36:32.095 clat (usec): min=4961, max=49477, avg=9733.65, stdev=2120.57 00:36:32.095 lat (usec): min=4967, max=49484, avg=9740.44, stdev=2120.60 00:36:32.095 clat percentiles (usec): 00:36:32.095 | 1.00th=[ 5604], 5.00th=[ 6390], 10.00th=[ 8455], 20.00th=[ 9110], 00:36:32.095 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10028], 00:36:32.095 | 70.00th=[10290], 80.00th=[10552], 90.00th=[10945], 95.00th=[11207], 00:36:32.095 | 99.00th=[11731], 99.50th=[11994], 99.90th=[49021], 99.95th=[49021], 00:36:32.095 | 99.99th=[49546] 00:36:32.095 bw ( KiB/s): min=36608, max=43264, per=35.77%, avg=39356.63, stdev=1723.32, samples=19 00:36:32.095 iops : min= 286, max= 338, avg=307.47, stdev=13.46, samples=19 00:36:32.095 lat (msec) : 10=55.13%, 20=44.68%, 50=0.19% 00:36:32.095 cpu : usr=94.00%, sys=4.77%, ctx=22, majf=0, minf=0 00:36:32.095 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:32.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.095 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.095 issued rwts: total=3080,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.095 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:32.095 filename0: (groupid=0, jobs=1): err= 0: pid=108384: Mon Nov 25 12:35:27 2024 00:36:32.095 read: IOPS=243, BW=30.4MiB/s (31.9MB/s)(304MiB/10002msec) 00:36:32.095 slat (nsec): min=2990, max=21575, avg=6554.81, stdev=799.73 00:36:32.095 clat (usec): min=3244, max=15452, avg=12330.00, stdev=1866.24 00:36:32.095 lat (usec): min=3250, max=15459, avg=12336.56, stdev=1866.22 00:36:32.095 clat percentiles (usec): 00:36:32.095 | 1.00th=[ 3621], 5.00th=[ 7963], 10.00th=[ 9634], 20.00th=[11863], 00:36:32.095 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12780], 60.00th=[13042], 00:36:32.095 | 70.00th=[13173], 80.00th=[13566], 90.00th=[13829], 95.00th=[14091], 00:36:32.095 | 99.00th=[14746], 99.50th=[14877], 99.90th=[15139], 99.95th=[15401], 00:36:32.095 | 99.99th=[15401] 00:36:32.095 bw ( KiB/s): min=29184, max=34816, per=28.28%, avg=31110.74, stdev=1574.32, samples=19 00:36:32.095 iops : min= 228, max= 272, avg=243.05, stdev=12.30, samples=19 00:36:32.095 lat (msec) : 4=1.19%, 10=9.38%, 20=89.43% 00:36:32.095 cpu : usr=94.04%, sys=4.85%, ctx=5, majf=0, minf=0 00:36:32.095 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:32.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.095 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.095 issued rwts: total=2432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.095 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:32.095 filename0: (groupid=0, jobs=1): err= 0: pid=108385: Mon Nov 25 12:35:27 2024 00:36:32.095 read: IOPS=308, BW=38.6MiB/s (40.5MB/s)(386MiB/10007msec) 00:36:32.095 slat (nsec): min=5656, max=33298, avg=6534.66, stdev=882.69 00:36:32.095 clat (usec): min=5499, max=51218, avg=9709.14, stdev=4885.44 00:36:32.095 lat (usec): min=5505, max=51224, avg=9715.68, stdev=4885.42 00:36:32.095 clat percentiles (usec): 00:36:32.095 | 1.00th=[ 7898], 5.00th=[ 8356], 10.00th=[ 8586], 20.00th=[ 8848], 00:36:32.095 | 30.00th=[ 
8979], 40.00th=[ 9110], 50.00th=[ 9110], 60.00th=[ 9241], 00:36:32.095 | 70.00th=[ 9372], 80.00th=[ 9503], 90.00th=[ 9634], 95.00th=[ 9896], 00:36:32.095 | 99.00th=[49546], 99.50th=[50070], 99.90th=[51119], 99.95th=[51119], 00:36:32.095 | 99.99th=[51119] 00:36:32.095 bw ( KiB/s): min=34560, max=42752, per=35.97%, avg=39572.21, stdev=2458.90, samples=19 00:36:32.095 iops : min= 270, max= 334, avg=309.16, stdev=19.21, samples=19 00:36:32.095 lat (msec) : 10=96.57%, 20=1.97%, 50=0.94%, 100=0.52% 00:36:32.095 cpu : usr=93.50%, sys=5.29%, ctx=10, majf=0, minf=0 00:36:32.095 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:32.095 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.095 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:32.095 issued rwts: total=3089,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:32.095 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:32.095 00:36:32.095 Run status group 0 (all jobs): 00:36:32.095 READ: bw=107MiB/s (113MB/s), 30.4MiB/s-38.6MiB/s (31.9MB/s-40.5MB/s), io=1075MiB (1127MB), run=10002-10007msec 00:36:32.354 12:35:27 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:36:32.354 12:35:27 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:36:32.354 12:35:27 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:36:32.354 12:35:27 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:32.354 12:35:27 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:36:32.354 12:35:27 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:32.354 12:35:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.354 12:35:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:32.354 12:35:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.354 12:35:27 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:32.354 12:35:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.354 12:35:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:32.354 ************************************ 00:36:32.354 END TEST fio_dif_digest 00:36:32.354 ************************************ 00:36:32.354 12:35:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.354 00:36:32.354 real 0m10.865s 00:36:32.354 user 0m28.717s 00:36:32.354 sys 0m1.668s 00:36:32.354 12:35:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:32.354 12:35:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:32.354 12:35:28 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:36:32.354 12:35:28 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:36:32.354 12:35:28 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:32.354 12:35:28 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:36:32.354 12:35:28 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:32.354 12:35:28 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:36:32.354 12:35:28 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:32.354 12:35:28 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:32.354 rmmod nvme_tcp 00:36:32.354 rmmod nvme_fabrics 00:36:32.354 rmmod nvme_keyring 00:36:32.354 12:35:28 nvmf_dif -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:36:32.354 12:35:28 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:36:32.354 12:35:28 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:36:32.354 12:35:28 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 107611 ']' 00:36:32.354 12:35:28 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 107611 00:36:32.354 12:35:28 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 107611 ']' 00:36:32.354 12:35:28 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 107611 00:36:32.354 12:35:28 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:36:32.354 12:35:28 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:32.354 12:35:28 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 107611 00:36:32.354 killing process with pid 107611 00:36:32.354 12:35:28 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:32.354 12:35:28 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:32.354 12:35:28 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 107611' 00:36:32.354 12:35:28 nvmf_dif -- common/autotest_common.sh@973 -- # kill 107611 00:36:32.354 12:35:28 nvmf_dif -- common/autotest_common.sh@978 -- # wait 107611 00:36:32.611 12:35:28 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:36:32.611 12:35:28 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:36:32.869 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:32.869 Waiting for block devices as requested 00:36:32.869 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:36:32.869 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:36:32.869 12:35:28 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:32.869 12:35:28 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:32.869 12:35:28 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:36:32.869 12:35:28 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:36:32.869 12:35:28 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:32.869 12:35:28 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:36:32.869 12:35:28 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:32.869 12:35:28 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:36:32.869 12:35:28 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:36:32.869 12:35:28 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:36:32.869 12:35:28 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:36:33.126 12:35:28 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:36:33.126 12:35:28 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:36:33.126 12:35:28 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:36:33.126 12:35:28 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:36:33.126 12:35:28 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:36:33.126 12:35:28 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:36:33.126 12:35:28 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:36:33.126 12:35:28 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:36:33.126 12:35:28 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:33.126 12:35:28 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:33.126 12:35:28 nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:36:33.126 12:35:28 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:33.126 12:35:28 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:33.126 12:35:28 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:33.126 12:35:28 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:36:33.126 00:36:33.126 real 0m58.713s 00:36:33.126 user 3m53.969s 00:36:33.126 sys 0m11.017s 00:36:33.126 12:35:28 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:33.126 ************************************ 00:36:33.126 END TEST nvmf_dif 00:36:33.126 ************************************ 00:36:33.126 12:35:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:33.126 12:35:28 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:33.126 12:35:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:33.126 12:35:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:33.126 12:35:28 -- common/autotest_common.sh@10 -- # set +x 00:36:33.126 ************************************ 00:36:33.126 START TEST nvmf_abort_qd_sizes 00:36:33.126 ************************************ 00:36:33.126 12:35:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:33.385 * Looking for test storage... 00:36:33.385 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:36:33.385 12:35:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:33.385 12:35:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:36:33.385 12:35:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:33.385 12:35:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:33.385 12:35:29 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:33.385 12:35:29 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:33.385 12:35:29 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:33.385 12:35:29 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:36:33.385 12:35:29 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:36:33.385 12:35:29 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:36:33.385 12:35:29 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:36:33.385 12:35:29 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:36:33.385 12:35:29 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:36:33.385 12:35:29 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:36:33.385 12:35:29 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:33.385 12:35:29 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:36:33.385 12:35:29 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:36:33.385 12:35:29 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:33.385 12:35:29 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:33.385 12:35:29 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:36:33.385 12:35:29 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:36:33.385 12:35:29 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:33.385 12:35:29 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:36:33.385 12:35:29 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:36:33.385 12:35:29 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:36:33.385 12:35:29 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:36:33.385 12:35:29 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:33.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:33.386 --rc genhtml_branch_coverage=1 00:36:33.386 --rc genhtml_function_coverage=1 00:36:33.386 --rc genhtml_legend=1 00:36:33.386 --rc geninfo_all_blocks=1 00:36:33.386 --rc geninfo_unexecuted_blocks=1 00:36:33.386 00:36:33.386 ' 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:33.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:33.386 --rc genhtml_branch_coverage=1 00:36:33.386 --rc genhtml_function_coverage=1 00:36:33.386 --rc genhtml_legend=1 00:36:33.386 --rc geninfo_all_blocks=1 00:36:33.386 --rc geninfo_unexecuted_blocks=1 00:36:33.386 00:36:33.386 ' 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:33.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:33.386 --rc genhtml_branch_coverage=1 00:36:33.386 --rc genhtml_function_coverage=1 00:36:33.386 --rc genhtml_legend=1 00:36:33.386 --rc geninfo_all_blocks=1 00:36:33.386 --rc geninfo_unexecuted_blocks=1 00:36:33.386 00:36:33.386 ' 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:33.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:33.386 --rc genhtml_branch_coverage=1 00:36:33.386 --rc genhtml_function_coverage=1 00:36:33.386 --rc genhtml_legend=1 00:36:33.386 --rc geninfo_all_blocks=1 00:36:33.386 --rc geninfo_unexecuted_blocks=1 00:36:33.386 00:36:33.386 ' 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:33.386 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:36:33.386 Cannot find device "nvmf_init_br" 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:36:33.386 Cannot find device "nvmf_init_br2" 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:36:33.386 Cannot find device "nvmf_tgt_br" 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:36:33.386 Cannot find device "nvmf_tgt_br2" 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:36:33.386 Cannot find device "nvmf_init_br" 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:36:33.386 Cannot find device "nvmf_init_br2" 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:36:33.386 Cannot find device "nvmf_tgt_br" 00:36:33.386 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:36:33.387 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:36:33.387 Cannot find device "nvmf_tgt_br2" 00:36:33.387 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:36:33.387 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:36:33.387 Cannot find device "nvmf_br" 00:36:33.387 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:36:33.387 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:36:33.387 Cannot find device "nvmf_init_if" 00:36:33.387 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:36:33.387 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:36:33.387 Cannot find device "nvmf_init_if2" 00:36:33.387 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:36:33.387 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:33.387 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
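[Editor's note] The "Cannot find device" and "Cannot open network namespace" messages above are the expected result of tearing down a topology that does not exist yet; the nvmf_veth_init entries that follow then build it. Condensed from that trace: the initiator side stays in the root namespace (10.0.0.1 and 10.0.0.2), the target side lives in the nvmf_tgt_ns_spdk namespace (10.0.0.3 and 10.0.0.4), and the veth peers are joined by the nvmf_br bridge. A sketch of one of the two interface pairs, using the commands as traced below:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_tgt_br up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # The second pair (nvmf_init_if2/nvmf_tgt_if2 at 10.0.0.2/10.0.0.4) is set
    # up identically; iptables ACCEPT rules then open TCP port 4420 on each
    # initiator interface, and the four pings that follow verify reachability
    # in both directions across the bridge.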
00:36:33.387 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:36:33.387 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:33.387 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:36:33.387 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:36:33.387 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:36:33.387 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:36:33.387 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:36:33.387 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:36:33.387 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:36:33.387 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:36:33.644 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:36:33.644 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:36:33.644 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:36:33.644 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:36:33.644 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:36:33.644 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:36:33.644 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:36:33.644 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:36:33.644 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:36:33.644 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:36:33.644 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:36:33.644 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:36:33.644 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:36:33.644 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:36:33.644 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:36:33.644 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:36:33.644 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:36:33.644 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:36:33.644 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:36:33.644 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:36:33.644 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:36:33.644 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:36:33.644 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:36:33.644 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:36:33.644 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:36:33.644 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:36:33.644 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:36:33.644 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:36:33.644 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:36:33.644 00:36:33.644 --- 10.0.0.3 ping statistics --- 00:36:33.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:33.644 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:36:33.644 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:36:33.644 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:36:33.644 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.072 ms 00:36:33.644 00:36:33.644 --- 10.0.0.4 ping statistics --- 00:36:33.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:33.644 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:36:33.644 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:36:33.644 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:33.644 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:36:33.644 00:36:33.644 --- 10.0.0.1 ping statistics --- 00:36:33.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:33.644 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:36:33.644 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:36:33.644 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:36:33.644 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.031 ms 00:36:33.644 00:36:33.644 --- 10.0.0.2 ping statistics --- 00:36:33.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:33.644 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:36:33.644 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:33.644 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:36:33.644 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:36:33.644 12:35:29 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:36:34.209 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:34.209 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:36:34.209 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:36:34.209 12:35:30 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:34.209 12:35:30 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:34.209 12:35:30 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:34.209 12:35:30 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:34.209 12:35:30 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:34.209 12:35:30 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:34.467 12:35:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:36:34.467 12:35:30 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:34.467 12:35:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:34.467 12:35:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:34.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:34.467 12:35:30 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=109025 00:36:34.467 12:35:30 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 109025 00:36:34.467 12:35:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 109025 ']' 00:36:34.467 12:35:30 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:36:34.467 12:35:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:34.467 12:35:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:34.467 12:35:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:34.467 12:35:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:34.467 12:35:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:34.467 [2024-11-25 12:35:30.122170] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:36:34.467 [2024-11-25 12:35:30.122317] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:34.467 [2024-11-25 12:35:30.266040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:34.467 [2024-11-25 12:35:30.320669] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:34.467 [2024-11-25 12:35:30.320886] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:34.467 [2024-11-25 12:35:30.320994] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:34.467 [2024-11-25 12:35:30.321090] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:34.467 [2024-11-25 12:35:30.321120] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:34.467 [2024-11-25 12:35:30.322196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:34.467 [2024-11-25 12:35:30.322277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:34.467 [2024-11-25 12:35:30.322368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:34.467 [2024-11-25 12:35:30.322367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:35.400 12:35:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:35.400 12:35:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:36:35.400 12:35:30 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:35.400 12:35:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:35.400 12:35:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:35.400 12:35:31 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:35.400 12:35:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:36:35.400 12:35:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:36:35.400 12:35:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:36:35.400 12:35:31 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:36:35.400 12:35:31 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:36:35.400 12:35:31 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:36:35.400 12:35:31 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:36:35.400 12:35:31 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:36:35.400 12:35:31 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:36:35.400 12:35:31 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:36:35.400 12:35:31 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:36:35.400 12:35:31 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:36:35.400 12:35:31 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:36:35.400 12:35:31 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:36:35.400 12:35:31 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # 
class=01 00:36:35.400 12:35:31 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:36:35.400 12:35:31 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:36:35.400 12:35:31 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:36:35.400 12:35:31 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:36:35.400 12:35:31 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:36:35.400 12:35:31 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:36:35.400 12:35:31 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:36:35.400 12:35:31 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:36:35.400 12:35:31 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:36:35.400 12:35:31 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:36:35.400 12:35:31 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:36:35.400 12:35:31 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:36:35.400 12:35:31 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:36:35.400 12:35:31 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:36:35.400 12:35:31 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:36:35.400 12:35:31 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:36:35.400 12:35:31 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:36:35.400 12:35:31 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:36:35.400 12:35:31 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:36:35.400 12:35:31 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:36:35.400 12:35:31 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:36:35.400 12:35:31 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:36:35.400 12:35:31 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:36:35.400 12:35:31 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:36:35.400 12:35:31 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:36:35.400 12:35:31 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:36:35.400 12:35:31 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:36:35.400 12:35:31 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:36:35.401 12:35:31 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:36:35.401 12:35:31 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:36:35.401 12:35:31 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:36:35.401 12:35:31 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:36:35.401 12:35:31 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:36:35.401 12:35:31 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:36:35.401 12:35:31 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:36:35.401 12:35:31 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:36:35.401 12:35:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:36:35.401 12:35:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:36:35.401 12:35:31 
nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:36:35.401 12:35:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:35.401 12:35:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:35.401 12:35:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:35.401 ************************************ 00:36:35.401 START TEST spdk_target_abort 00:36:35.401 ************************************ 00:36:35.401 12:35:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:36:35.401 12:35:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:36:35.401 12:35:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:36:35.401 12:35:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.401 12:35:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:35.401 spdk_targetn1 00:36:35.401 12:35:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.401 12:35:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:35.401 12:35:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.401 12:35:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:35.401 [2024-11-25 12:35:31.137685] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:35.401 12:35:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.401 12:35:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:36:35.401 12:35:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.401 12:35:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:35.401 12:35:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.401 12:35:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:36:35.401 12:35:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.401 12:35:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:35.401 12:35:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.401 12:35:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:36:35.401 12:35:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:35.401 12:35:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:35.401 [2024-11-25 12:35:31.165905] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:36:35.401 12:35:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.401 12:35:31 
nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:36:35.401 12:35:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:35.401 12:35:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:35.401 12:35:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:36:35.401 12:35:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:35.401 12:35:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:35.401 12:35:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:35.401 12:35:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:35.401 12:35:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:35.401 12:35:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:35.401 12:35:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:35.401 12:35:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:35.401 12:35:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:35.401 12:35:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:35.401 12:35:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:36:35.401 12:35:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:35.401 12:35:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:36:35.401 12:35:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:35.401 12:35:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:35.401 12:35:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:35.401 12:35:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:38.689 Initializing NVMe Controllers 00:36:38.689 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:36:38.689 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:38.689 Initialization complete. Launching workers. 
00:36:38.689 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 14634, failed: 0 00:36:38.689 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1017, failed to submit 13617 00:36:38.689 success 804, unsuccessful 213, failed 0 00:36:38.689 12:35:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:38.689 12:35:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:41.969 Initializing NVMe Controllers 00:36:41.969 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:36:41.969 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:41.969 Initialization complete. Launching workers. 00:36:41.969 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 6007, failed: 0 00:36:41.969 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1230, failed to submit 4777 00:36:41.969 success 262, unsuccessful 968, failed 0 00:36:41.969 12:35:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:41.969 12:35:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:45.271 Initializing NVMe Controllers 00:36:45.271 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:36:45.271 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:45.271 Initialization complete. Launching workers. 
00:36:45.271 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33223, failed: 0 00:36:45.271 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2707, failed to submit 30516 00:36:45.271 success 549, unsuccessful 2158, failed 0 00:36:45.271 12:35:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:36:45.271 12:35:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.271 12:35:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:45.271 12:35:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.271 12:35:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:36:45.271 12:35:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:45.271 12:35:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:45.542 12:35:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:45.542 12:35:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 109025 00:36:45.542 12:35:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 109025 ']' 00:36:45.542 12:35:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 109025 00:36:45.542 12:35:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:36:45.542 12:35:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:45.542 12:35:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109025 00:36:45.542 killing process with pid 109025 00:36:45.542 12:35:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:45.542 12:35:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:45.542 12:35:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109025' 00:36:45.542 12:35:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 109025 00:36:45.542 12:35:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 109025 00:36:45.800 ************************************ 00:36:45.800 END TEST spdk_target_abort 00:36:45.800 ************************************ 00:36:45.800 00:36:45.800 real 0m10.398s 00:36:45.800 user 0m42.822s 00:36:45.800 sys 0m1.333s 00:36:45.800 12:35:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:45.800 12:35:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:45.800 12:35:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:36:45.800 12:35:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:45.800 12:35:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:45.800 12:35:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:45.800 ************************************ 00:36:45.800 START TEST kernel_target_abort 00:36:45.800 
************************************ 00:36:45.800 12:35:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:36:45.800 12:35:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:36:45.800 12:35:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:36:45.800 12:35:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:45.800 12:35:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:45.800 12:35:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:45.800 12:35:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:45.800 12:35:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:45.800 12:35:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:45.800 12:35:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:45.800 12:35:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:45.800 12:35:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:45.800 12:35:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:45.800 12:35:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:45.800 12:35:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:36:45.800 12:35:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:45.800 12:35:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:45.800 12:35:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:45.800 12:35:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:36:45.800 12:35:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:36:45.800 12:35:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:36:45.800 12:35:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:45.800 12:35:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:36:46.057 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:46.057 Waiting for block devices as requested 00:36:46.057 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:36:46.057 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:36:46.315 12:35:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:36:46.316 12:35:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:46.316 12:35:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:36:46.316 12:35:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:36:46.316 12:35:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:46.316 12:35:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:36:46.316 12:35:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:36:46.316 12:35:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:36:46.316 12:35:41 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:36:46.316 No valid GPT data, bailing 00:36:46.316 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:46.316 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:36:46.316 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:36:46.316 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:36:46.316 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:36:46.316 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:36:46.316 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:36:46.316 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:36:46.316 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:36:46.316 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:36:46.316 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:36:46.316 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:36:46.316 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:36:46.316 No valid GPT data, bailing 00:36:46.316 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:36:46.316 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:36:46.316 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:36:46.316 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:36:46.316 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:36:46.316 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:36:46.316 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:36:46.316 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:36:46.316 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:36:46.316 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:36:46.316 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:36:46.316 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:36:46.316 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:36:46.316 No valid GPT data, bailing 00:36:46.316 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:36:46.316 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:36:46.316 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:36:46.316 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:36:46.316 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:36:46.316 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:36:46.316 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:36:46.316 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:36:46.316 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:36:46.316 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:36:46.316 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:36:46.316 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:36:46.316 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:36:46.316 No valid GPT data, bailing 00:36:46.575 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:36:46.575 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:36:46.575 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:36:46.575 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:36:46.575 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:36:46.575 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:46.575 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:46.575 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:46.575 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:46.575 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:36:46.575 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:36:46.575 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:36:46.575 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:36:46.575 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:36:46.575 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:36:46.575 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:36:46.575 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:46.575 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d --hostid=5934b511-67a7-4c73-84f0-52fdf36bb79d -a 10.0.0.1 -t tcp -s 4420 00:36:46.575 00:36:46.575 Discovery Log Number of Records 2, Generation counter 2 00:36:46.575 =====Discovery Log Entry 0====== 00:36:46.575 trtype: tcp 00:36:46.575 adrfam: ipv4 00:36:46.575 subtype: current discovery subsystem 00:36:46.575 treq: not specified, sq flow control disable supported 00:36:46.575 portid: 1 00:36:46.575 trsvcid: 4420 00:36:46.575 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:46.575 traddr: 10.0.0.1 00:36:46.575 eflags: none 00:36:46.575 sectype: none 00:36:46.575 =====Discovery Log Entry 1====== 00:36:46.575 trtype: tcp 00:36:46.575 adrfam: ipv4 00:36:46.575 subtype: nvme subsystem 00:36:46.575 treq: not specified, sq flow control disable supported 00:36:46.575 portid: 1 00:36:46.575 trsvcid: 4420 00:36:46.575 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:46.575 traddr: 10.0.0.1 00:36:46.575 eflags: none 00:36:46.575 sectype: none 00:36:46.575 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:36:46.575 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:46.575 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:46.575 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:36:46.575 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:46.575 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:46.575 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:46.575 12:35:42 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:46.575 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:46.575 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:46.575 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:46.575 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:46.575 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:46.575 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:46.575 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:36:46.575 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:46.575 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:36:46.575 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:46.575 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:46.575 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:46.575 12:35:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:49.892 Initializing NVMe Controllers 00:36:49.892 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:49.892 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:49.892 Initialization complete. Launching workers. 00:36:49.892 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 58408, failed: 0 00:36:49.892 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 58408, failed to submit 0 00:36:49.892 success 0, unsuccessful 58408, failed 0 00:36:49.892 12:35:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:49.892 12:35:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:53.186 Initializing NVMe Controllers 00:36:53.186 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:53.186 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:53.186 Initialization complete. Launching workers. 
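
The sweep driving these runs is rabort() from abort_qd_sizes.sh, reduced here to its two loops: the -r transport string is assembled field by field (exactly the incremental target= values traced above), then the abort example is re-run at each queue depth.

trtype=tcp adrfam=IPv4 traddr=10.0.0.1 trsvcid=4420
subnqn=nqn.2016-06.io.spdk:testnqn
qds=(4 24 64)
target=""
for r in trtype adrfam traddr trsvcid subnqn; do
    target+="${target:+ }$r:${!r}"    # indirect expansion: the value of $r
done
for qd in "${qds[@]}"; do
    build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
done

The queue depth is the point of the test: at -q 4 every abort fits (58408 submitted, 0 failed to submit), while at the larger depths most abort submissions are rejected once the queue saturates, as the failed-to-submit counts below show.
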
00:36:53.186 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 78949, failed: 0 00:36:53.187 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32534, failed to submit 46415 00:36:53.187 success 0, unsuccessful 32534, failed 0 00:36:53.187 12:35:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:53.187 12:35:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:56.474 Initializing NVMe Controllers 00:36:56.474 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:56.474 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:56.475 Initialization complete. Launching workers. 00:36:56.475 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 87563, failed: 0 00:36:56.475 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21878, failed to submit 65685 00:36:56.475 success 0, unsuccessful 21878, failed 0 00:36:56.475 12:35:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:36:56.475 12:35:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:56.475 12:35:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:36:56.475 12:35:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:56.475 12:35:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:56.475 12:35:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:56.475 12:35:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:56.475 12:35:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:36:56.475 12:35:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:36:56.475 12:35:51 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:36:56.475 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:57.408 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:36:57.665 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:36:57.665 ************************************ 00:36:57.665 END TEST kernel_target_abort 00:36:57.665 ************************************ 00:36:57.665 00:36:57.665 real 0m11.840s 00:36:57.665 user 0m6.559s 00:36:57.665 sys 0m2.663s 00:36:57.665 12:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:57.665 12:35:53 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:57.665 12:35:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:36:57.665 12:35:53 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:36:57.665 
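
The kernel target that clean_kernel_target just dismantled was assembled earlier purely through nvmet's configfs tree. The xtrace shows the mkdir/echo/ln sequence but not the attribute files on the right-hand side of the redirections, so the sketch below fills those in with the standard nvmet names (attr_model, attr_allow_any_host, device_path, enable, addr_*) as an assumption:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=$nvmet/ports/1
modprobe nvmet
mkdir "$subsys" "$subsys/namespaces/1" "$port"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # assumed attribute file
echo 1            > "$subsys/attr_allow_any_host"              # assumed attribute file
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"

Teardown runs the same steps in reverse, as traced: echo 0 to the namespace's enable file, rm the port symlink, rmdir the namespace, port, and subsystem directories, then modprobe -r nvmet_tcp nvmet.
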
12:35:53 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:57.665 12:35:53 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:36:57.665 12:35:53 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:57.665 12:35:53 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:36:57.665 12:35:53 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:57.665 12:35:53 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:57.665 rmmod nvme_tcp 00:36:57.665 rmmod nvme_fabrics 00:36:57.665 rmmod nvme_keyring 00:36:57.665 12:35:53 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:57.665 12:35:53 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:36:57.666 12:35:53 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:36:57.666 12:35:53 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 109025 ']' 00:36:57.666 12:35:53 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 109025 00:36:57.666 Process with pid 109025 is not found 00:36:57.666 12:35:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 109025 ']' 00:36:57.666 12:35:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 109025 00:36:57.666 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (109025) - No such process 00:36:57.666 12:35:53 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 109025 is not found' 00:36:57.666 12:35:53 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:36:57.666 12:35:53 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:36:57.926 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:36:57.926 Waiting for block devices as requested 00:36:57.926 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:36:58.186 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:36:58.186 12:35:53 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:58.186 12:35:53 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:58.186 12:35:53 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:36:58.186 12:35:53 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:36:58.186 12:35:53 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:58.186 12:35:53 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:36:58.186 12:35:53 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:58.186 12:35:53 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:36:58.186 12:35:53 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:36:58.186 12:35:53 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:36:58.186 12:35:53 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:36:58.186 12:35:53 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:36:58.186 12:35:53 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:36:58.186 12:35:53 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:36:58.186 12:35:53 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:36:58.186 12:35:53 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:36:58.186 12:35:53 
nvmf_abort_qd_sizes -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:36:58.186 12:35:54 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:36:58.186 12:35:54 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:36:58.448 12:35:54 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:36:58.448 12:35:54 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:36:58.448 12:35:54 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:36:58.448 12:35:54 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:58.448 12:35:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:58.448 12:35:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:58.448 12:35:54 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:36:58.448 00:36:58.448 real 0m25.183s 00:36:58.448 user 0m50.368s 00:36:58.448 sys 0m5.133s 00:36:58.448 12:35:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:58.448 12:35:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:58.448 ************************************ 00:36:58.448 END TEST nvmf_abort_qd_sizes 00:36:58.448 ************************************ 00:36:58.448 12:35:54 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:36:58.448 12:35:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:58.448 12:35:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:58.448 12:35:54 -- common/autotest_common.sh@10 -- # set +x 00:36:58.448 ************************************ 00:36:58.448 START TEST keyring_file 00:36:58.448 ************************************ 00:36:58.448 12:35:54 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:36:58.448 * Looking for test storage... 
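
Each of these suites is launched through the run_test wrapper, which is what produces the starred START TEST / END TEST banners and the real/user/sys timing block above. A hypothetical, heavily condensed reduction of it (the real helper in autotest_common.sh also manages xtrace state):

run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"    # the source of the real/user/sys lines
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}

Here it is invoked as run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh, exactly as in the trace line above.
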
00:36:58.448 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:36:58.448 12:35:54 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:58.448 12:35:54 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:36:58.448 12:35:54 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:58.448 12:35:54 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:58.448 12:35:54 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:58.448 12:35:54 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:58.448 12:35:54 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:58.448 12:35:54 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:36:58.448 12:35:54 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:36:58.448 12:35:54 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:36:58.448 12:35:54 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:36:58.448 12:35:54 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:36:58.448 12:35:54 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:36:58.448 12:35:54 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:36:58.448 12:35:54 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:58.448 12:35:54 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:36:58.448 12:35:54 keyring_file -- scripts/common.sh@345 -- # : 1 00:36:58.448 12:35:54 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:58.448 12:35:54 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:58.448 12:35:54 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:36:58.448 12:35:54 keyring_file -- scripts/common.sh@353 -- # local d=1 00:36:58.448 12:35:54 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:58.448 12:35:54 keyring_file -- scripts/common.sh@355 -- # echo 1 00:36:58.448 12:35:54 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:36:58.448 12:35:54 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:36:58.448 12:35:54 keyring_file -- scripts/common.sh@353 -- # local d=2 00:36:58.448 12:35:54 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:58.448 12:35:54 keyring_file -- scripts/common.sh@355 -- # echo 2 00:36:58.448 12:35:54 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:36:58.448 12:35:54 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:58.448 12:35:54 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:58.448 12:35:54 keyring_file -- scripts/common.sh@368 -- # return 0 00:36:58.448 12:35:54 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:58.448 12:35:54 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:58.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:58.448 --rc genhtml_branch_coverage=1 00:36:58.448 --rc genhtml_function_coverage=1 00:36:58.448 --rc genhtml_legend=1 00:36:58.448 --rc geninfo_all_blocks=1 00:36:58.448 --rc geninfo_unexecuted_blocks=1 00:36:58.448 00:36:58.448 ' 00:36:58.448 12:35:54 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:58.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:58.448 --rc genhtml_branch_coverage=1 00:36:58.448 --rc genhtml_function_coverage=1 00:36:58.448 --rc genhtml_legend=1 00:36:58.448 --rc geninfo_all_blocks=1 00:36:58.448 --rc 
geninfo_unexecuted_blocks=1 00:36:58.448 00:36:58.448 ' 00:36:58.448 12:35:54 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:58.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:58.448 --rc genhtml_branch_coverage=1 00:36:58.448 --rc genhtml_function_coverage=1 00:36:58.448 --rc genhtml_legend=1 00:36:58.448 --rc geninfo_all_blocks=1 00:36:58.448 --rc geninfo_unexecuted_blocks=1 00:36:58.448 00:36:58.448 ' 00:36:58.448 12:35:54 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:58.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:58.448 --rc genhtml_branch_coverage=1 00:36:58.448 --rc genhtml_function_coverage=1 00:36:58.448 --rc genhtml_legend=1 00:36:58.448 --rc geninfo_all_blocks=1 00:36:58.448 --rc geninfo_unexecuted_blocks=1 00:36:58.448 00:36:58.448 ' 00:36:58.448 12:35:54 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:36:58.448 12:35:54 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:36:58.448 12:35:54 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:36:58.448 12:35:54 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:58.448 12:35:54 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:58.448 12:35:54 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:58.448 12:35:54 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:58.448 12:35:54 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:58.448 12:35:54 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:58.448 12:35:54 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:58.448 12:35:54 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:58.448 12:35:54 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:58.709 12:35:54 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:58.709 12:35:54 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:36:58.709 12:35:54 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:36:58.709 12:35:54 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:58.709 12:35:54 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:58.709 12:35:54 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:36:58.709 12:35:54 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:58.709 12:35:54 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:36:58.709 12:35:54 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:36:58.709 12:35:54 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:58.709 12:35:54 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:58.709 12:35:54 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:58.709 12:35:54 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:58.709 12:35:54 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:58.709 12:35:54 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:58.709 12:35:54 keyring_file -- paths/export.sh@5 -- # export PATH 00:36:58.709 12:35:54 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:58.709 12:35:54 keyring_file -- nvmf/common.sh@51 -- # : 0 00:36:58.709 12:35:54 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:58.709 12:35:54 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:58.709 12:35:54 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:58.709 12:35:54 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:58.709 12:35:54 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:58.709 12:35:54 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:58.709 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:58.709 12:35:54 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:58.709 12:35:54 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:58.709 12:35:54 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:58.709 12:35:54 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:58.709 12:35:54 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:58.709 12:35:54 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:58.709 12:35:54 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:36:58.709 12:35:54 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:36:58.709 12:35:54 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:36:58.709 12:35:54 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:58.709 12:35:54 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:58.709 12:35:54 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:58.709 12:35:54 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:58.710 12:35:54 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:58.710 12:35:54 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:58.710 12:35:54 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.cSvGfbUCRK 00:36:58.710 12:35:54 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:58.710 12:35:54 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:58.710 12:35:54 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:36:58.710 12:35:54 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:58.710 12:35:54 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:36:58.710 12:35:54 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:36:58.710 12:35:54 keyring_file -- nvmf/common.sh@733 -- # python - 00:36:58.710 12:35:54 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.cSvGfbUCRK 00:36:58.710 12:35:54 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.cSvGfbUCRK 00:36:58.710 12:35:54 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.cSvGfbUCRK 00:36:58.710 12:35:54 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:36:58.710 12:35:54 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:58.710 12:35:54 keyring_file -- keyring/common.sh@17 -- # name=key1 00:36:58.710 12:35:54 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:58.710 12:35:54 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:58.710 12:35:54 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:58.710 12:35:54 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.CHR5zhv1lk 00:36:58.710 12:35:54 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:58.710 12:35:54 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:58.710 12:35:54 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:36:58.710 12:35:54 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:58.710 12:35:54 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:36:58.710 12:35:54 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:36:58.710 12:35:54 keyring_file -- nvmf/common.sh@733 -- # python - 00:36:58.710 12:35:54 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.CHR5zhv1lk 00:36:58.710 12:35:54 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.CHR5zhv1lk 00:36:58.710 12:35:54 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.CHR5zhv1lk 00:36:58.710 12:35:54 keyring_file -- keyring/file.sh@30 -- # tgtpid=109954 00:36:58.710 12:35:54 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:36:58.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
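
While spdk_tgt boots, note how the two key files were produced. prep_key hands a hex string and digest to format_interchange_psk, which shells out to python to wrap it in the NVMe TLS PSK interchange format and then locks the file to 0600 (the mode the keyring later insists on). The python body is not visible in the xtrace, so this reconstruction is an assumption modeled on that format: the ASCII key bytes plus a trailing CRC-32 are base64-encoded and framed by the NVMeTLSkey-1 prefix and a two-digit digest field.

key=00112233445566778899aabbccddeeff
digest=0
path=$(mktemp)
python3 -c '
import base64, sys, zlib
key = sys.argv[1].encode()                     # assumed: ASCII bytes, not hex-decoded
crc = zlib.crc32(key).to_bytes(4, "little")    # assumed CRC placement and endianness
b64 = base64.b64encode(key + crc).decode()
print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), b64))
' "$key" "$digest" > "$path"
chmod 0600 "$path"

The result is a one-line /tmp/tmp.XXXXXX file such as the /tmp/tmp.cSvGfbUCRK and /tmp/tmp.CHR5zhv1lk paths registered below.
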
00:36:58.710 12:35:54 keyring_file -- keyring/file.sh@32 -- # waitforlisten 109954 00:36:58.710 12:35:54 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 109954 ']' 00:36:58.710 12:35:54 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:58.710 12:35:54 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:58.710 12:35:54 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:58.710 12:35:54 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:58.710 12:35:54 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:58.710 [2024-11-25 12:35:54.471394] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:36:58.710 [2024-11-25 12:35:54.471554] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109954 ] 00:36:58.971 [2024-11-25 12:35:54.608956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:58.971 [2024-11-25 12:35:54.652892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:59.540 12:35:55 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:59.540 12:35:55 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:36:59.540 12:35:55 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:36:59.540 12:35:55 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.540 12:35:55 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:59.540 [2024-11-25 12:35:55.303463] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:59.540 null0 00:36:59.540 [2024-11-25 12:35:55.335428] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:59.540 [2024-11-25 12:35:55.335589] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:59.540 12:35:55 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.540 12:35:55 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:59.540 12:35:55 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:59.540 12:35:55 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:59.540 12:35:55 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:59.540 12:35:55 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:59.540 12:35:55 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:59.540 12:35:55 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:59.540 12:35:55 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:59.540 12:35:55 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.540 12:35:55 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:59.540 [2024-11-25 12:35:55.363424] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:36:59.540 2024/11/25 12:35:55 error on JSON-RPC call, method: 
nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 00:36:59.540 request: 00:36:59.540 { 00:36:59.540 "method": "nvmf_subsystem_add_listener", 00:36:59.540 "params": { 00:36:59.540 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:36:59.540 "secure_channel": false, 00:36:59.540 "listen_address": { 00:36:59.540 "trtype": "tcp", 00:36:59.540 "traddr": "127.0.0.1", 00:36:59.540 "trsvcid": "4420" 00:36:59.540 } 00:36:59.540 } 00:36:59.540 } 00:36:59.540 Got JSON-RPC error response 00:36:59.540 GoRPCClient: error on JSON-RPC call 00:36:59.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:59.540 12:35:55 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:59.540 12:35:55 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:36:59.540 12:35:55 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:59.540 12:35:55 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:59.540 12:35:55 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:59.540 12:35:55 keyring_file -- keyring/file.sh@47 -- # bperfpid=109985 00:36:59.540 12:35:55 keyring_file -- keyring/file.sh@49 -- # waitforlisten 109985 /var/tmp/bperf.sock 00:36:59.540 12:35:55 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 109985 ']' 00:36:59.540 12:35:55 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:59.540 12:35:55 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:59.540 12:35:55 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:59.540 12:35:55 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:59.540 12:35:55 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:59.540 12:35:55 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:36:59.540 [2024-11-25 12:35:55.407337] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
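
Everything the test does from here on goes through a pair of small helpers from keyring/common.sh, whose expansions are visible on every traced line: bperf_cmd is rpc.py aimed at bdevperf's UNIX socket, and key inspection is a jq filter over keyring_get_keys. Reconstructed from those expansions:

bperfsock=/var/tmp/bperf.sock
bperf_cmd() {
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$bperfsock" "$@"
}
get_key() {
    bperf_cmd keyring_get_keys | jq ".[] | select(.name == \"$1\")"
}
get_refcnt() {
    get_key "$1" | jq -r .refcnt
}

So a check like (( $(get_refcnt key0) == 1 )) asserts that key0 is registered but not yet pinned by any connection.
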
00:36:59.540 [2024-11-25 12:35:55.407390] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109985 ] 00:36:59.800 [2024-11-25 12:35:55.544438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:59.800 [2024-11-25 12:35:55.580279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:00.371 12:35:56 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:00.371 12:35:56 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:37:00.371 12:35:56 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.cSvGfbUCRK 00:37:00.371 12:35:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.cSvGfbUCRK 00:37:00.632 12:35:56 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.CHR5zhv1lk 00:37:00.632 12:35:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.CHR5zhv1lk 00:37:00.892 12:35:56 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:37:00.892 12:35:56 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:37:00.892 12:35:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:00.892 12:35:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:00.892 12:35:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:01.152 12:35:56 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.cSvGfbUCRK == \/\t\m\p\/\t\m\p\.\c\S\v\G\f\b\U\C\R\K ]] 00:37:01.152 12:35:56 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:37:01.152 12:35:56 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:37:01.152 12:35:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:01.152 12:35:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:01.152 12:35:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:01.152 12:35:57 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.CHR5zhv1lk == \/\t\m\p\/\t\m\p\.\C\H\R\5\z\h\v\1\l\k ]] 00:37:01.152 12:35:57 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:37:01.152 12:35:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:01.152 12:35:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:01.152 12:35:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:01.152 12:35:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:01.152 12:35:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:01.463 12:35:57 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:37:01.463 12:35:57 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:37:01.463 12:35:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:01.463 12:35:57 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:01.463 12:35:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:01.463 12:35:57 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:37:01.463 12:35:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:01.763 12:35:57 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:37:01.763 12:35:57 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:01.764 12:35:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:01.764 [2024-11-25 12:35:57.597721] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:02.024 nvme0n1 00:37:02.024 12:35:57 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:37:02.024 12:35:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:02.024 12:35:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:02.024 12:35:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:02.024 12:35:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:02.024 12:35:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:02.285 12:35:57 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:37:02.285 12:35:57 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:37:02.285 12:35:57 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:02.285 12:35:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:02.285 12:35:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:02.285 12:35:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:02.285 12:35:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:02.285 12:35:58 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:37:02.285 12:35:58 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:02.544 Running I/O for 1 seconds... 
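
While bdevperf runs its one-second workload, the happy path that led here is worth restating end to end. Reassembled from the traced commands, with the same names and addresses:

bperf_cmd keyring_file_add_key key0 /tmp/tmp.cSvGfbUCRK
bperf_cmd keyring_file_add_key key1 /tmp/tmp.CHR5zhv1lk
bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0

# after the TLS attach, key0 is pinned by the live connection:
(( $(get_refcnt key0) == 2 ))   # registration + connection
(( $(get_refcnt key1) == 1 ))   # registered, unused

The refcount bump from 1 to 2 on key0 is exactly what the (( 2 == 2 )) check above verified.
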
00:37:03.480 17029.00 IOPS, 66.52 MiB/s 00:37:03.480 Latency(us) 00:37:03.480 [2024-11-25T12:35:59.356Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:03.480 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:37:03.480 nvme0n1 : 1.01 17071.66 66.69 0.00 0.00 7481.89 3049.94 17644.31 00:37:03.480 [2024-11-25T12:35:59.356Z] =================================================================================================================== 00:37:03.480 [2024-11-25T12:35:59.356Z] Total : 17071.66 66.69 0.00 0.00 7481.89 3049.94 17644.31 00:37:03.480 { 00:37:03.480 "results": [ 00:37:03.480 { 00:37:03.480 "job": "nvme0n1", 00:37:03.480 "core_mask": "0x2", 00:37:03.480 "workload": "randrw", 00:37:03.480 "percentage": 50, 00:37:03.480 "status": "finished", 00:37:03.480 "queue_depth": 128, 00:37:03.480 "io_size": 4096, 00:37:03.480 "runtime": 1.005116, 00:37:03.480 "iops": 17071.661380377987, 00:37:03.480 "mibps": 66.68617726710151, 00:37:03.480 "io_failed": 0, 00:37:03.480 "io_timeout": 0, 00:37:03.480 "avg_latency_us": 7481.893479537539, 00:37:03.480 "min_latency_us": 3049.944615384615, 00:37:03.480 "max_latency_us": 17644.30769230769 00:37:03.480 } 00:37:03.480 ], 00:37:03.480 "core_count": 1 00:37:03.480 } 00:37:03.480 12:35:59 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:03.480 12:35:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:03.739 12:35:59 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:37:03.739 12:35:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:03.739 12:35:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:03.739 12:35:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:03.739 12:35:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:03.739 12:35:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:04.000 12:35:59 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:37:04.000 12:35:59 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:37:04.000 12:35:59 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:04.000 12:35:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:04.000 12:35:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:04.000 12:35:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:04.000 12:35:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:04.000 12:35:59 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:37:04.000 12:35:59 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:04.000 12:35:59 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:37:04.000 12:35:59 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:04.000 12:35:59 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:37:04.000 12:35:59 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:04.000 12:35:59 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:37:04.000 12:35:59 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:04.000 12:35:59 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:04.000 12:35:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:04.258 [2024-11-25 12:36:00.028556] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:04.258 [2024-11-25 12:36:00.029124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10500b0 (107): Transport endpoint is not connected 00:37:04.258 [2024-11-25 12:36:00.030114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10500b0 (9): Bad file descriptor 00:37:04.258 [2024-11-25 12:36:00.031113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:37:04.258 [2024-11-25 12:36:00.031129] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:04.258 [2024-11-25 12:36:00.031136] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:37:04.259 [2024-11-25 12:36:00.031143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
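
The failing attach above is wrapped in NOT, the autotest_common.sh helper that inverts a command's exit status so an expected failure counts as a pass; the es=1 bookkeeping traced around these calls comes from it. A condensed sketch (the real helper also special-cases large exit codes, as the es > 128 check in the trace suggests):

NOT() {
    local es=0
    "$@" || es=$?
    # succeed only if the wrapped command failed
    (( es != 0 ))
}

# expected to fail here, as in the trace above:
NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1

The JSON-RPC error dump for that rejected attach follows below.
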
00:37:04.259 2024/11/25 12:36:00 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:37:04.259 request: 00:37:04.259 { 00:37:04.259 "method": "bdev_nvme_attach_controller", 00:37:04.259 "params": { 00:37:04.259 "name": "nvme0", 00:37:04.259 "trtype": "tcp", 00:37:04.259 "traddr": "127.0.0.1", 00:37:04.259 "adrfam": "ipv4", 00:37:04.259 "trsvcid": "4420", 00:37:04.259 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:04.259 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:04.259 "prchk_reftag": false, 00:37:04.259 "prchk_guard": false, 00:37:04.259 "hdgst": false, 00:37:04.259 "ddgst": false, 00:37:04.259 "psk": "key1", 00:37:04.259 "allow_unrecognized_csi": false 00:37:04.259 } 00:37:04.259 } 00:37:04.259 Got JSON-RPC error response 00:37:04.259 GoRPCClient: error on JSON-RPC call 00:37:04.259 12:36:00 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:37:04.259 12:36:00 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:04.259 12:36:00 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:04.259 12:36:00 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:04.259 12:36:00 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:37:04.259 12:36:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:04.259 12:36:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:04.259 12:36:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:04.259 12:36:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:04.259 12:36:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:04.516 12:36:00 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:37:04.516 12:36:00 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:37:04.516 12:36:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:04.516 12:36:00 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:04.516 12:36:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:04.516 12:36:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:04.516 12:36:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:04.776 12:36:00 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:37:04.776 12:36:00 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:37:04.776 12:36:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:05.036 12:36:00 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:37:05.036 12:36:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:37:05.036 12:36:00 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:37:05.036 12:36:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock keyring_get_keys 00:37:05.036 12:36:00 keyring_file -- keyring/file.sh@78 -- # jq length 00:37:05.299 12:36:01 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:37:05.299 12:36:01 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.cSvGfbUCRK 00:37:05.299 12:36:01 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.cSvGfbUCRK 00:37:05.299 12:36:01 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:37:05.299 12:36:01 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.cSvGfbUCRK 00:37:05.299 12:36:01 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:37:05.299 12:36:01 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:05.299 12:36:01 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:37:05.299 12:36:01 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:05.299 12:36:01 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.cSvGfbUCRK 00:37:05.299 12:36:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.cSvGfbUCRK 00:37:05.559 [2024-11-25 12:36:01.292438] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.cSvGfbUCRK': 0100660 00:37:05.559 [2024-11-25 12:36:01.292468] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:37:05.559 2024/11/25 12:36:01 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.cSvGfbUCRK], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:37:05.559 request: 00:37:05.559 { 00:37:05.559 "method": "keyring_file_add_key", 00:37:05.559 "params": { 00:37:05.559 "name": "key0", 00:37:05.559 "path": "/tmp/tmp.cSvGfbUCRK" 00:37:05.559 } 00:37:05.559 } 00:37:05.559 Got JSON-RPC error response 00:37:05.559 GoRPCClient: error on JSON-RPC call 00:37:05.559 12:36:01 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:37:05.559 12:36:01 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:05.559 12:36:01 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:05.559 12:36:01 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:05.559 12:36:01 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.cSvGfbUCRK 00:37:05.559 12:36:01 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.cSvGfbUCRK 00:37:05.560 12:36:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.cSvGfbUCRK 00:37:05.819 12:36:01 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.cSvGfbUCRK 00:37:05.819 12:36:01 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:37:05.819 12:36:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:05.819 12:36:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:05.819 12:36:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:05.819 12:36:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:05.819 12:36:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:05.819 12:36:01 keyring_file -- 
keyring/file.sh@89 -- # (( 1 == 1 )) 00:37:05.819 12:36:01 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:05.819 12:36:01 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:37:05.819 12:36:01 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:05.819 12:36:01 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:37:05.819 12:36:01 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:05.819 12:36:01 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:37:05.819 12:36:01 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:05.819 12:36:01 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:05.819 12:36:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:06.080 [2024-11-25 12:36:01.876634] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.cSvGfbUCRK': No such file or directory 00:37:06.080 [2024-11-25 12:36:01.876676] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:37:06.080 [2024-11-25 12:36:01.876692] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:37:06.080 [2024-11-25 12:36:01.876699] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:37:06.080 [2024-11-25 12:36:01.876706] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:06.080 [2024-11-25 12:36:01.876712] bdev_nvme.c:6764:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:37:06.080 2024/11/25 12:36:01 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:37:06.080 request: 00:37:06.080 { 00:37:06.080 "method": "bdev_nvme_attach_controller", 00:37:06.080 "params": { 00:37:06.080 "name": "nvme0", 00:37:06.080 "trtype": "tcp", 00:37:06.080 "traddr": "127.0.0.1", 00:37:06.080 "adrfam": "ipv4", 00:37:06.080 "trsvcid": "4420", 00:37:06.080 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:06.080 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:06.080 "prchk_reftag": false, 00:37:06.080 "prchk_guard": false, 00:37:06.080 "hdgst": false, 00:37:06.080 "ddgst": false, 00:37:06.080 "psk": "key0", 00:37:06.080 "allow_unrecognized_csi": false 00:37:06.080 } 00:37:06.080 } 00:37:06.080 Got JSON-RPC error response 00:37:06.080 
GoRPCClient: error on JSON-RPC call 00:37:06.080 12:36:01 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:37:06.080 12:36:01 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:06.080 12:36:01 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:06.080 12:36:01 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:06.080 12:36:01 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:37:06.080 12:36:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:06.338 12:36:02 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:06.338 12:36:02 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:06.338 12:36:02 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:06.338 12:36:02 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:06.338 12:36:02 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:06.338 12:36:02 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:06.338 12:36:02 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.sNqxv9AvRk 00:37:06.338 12:36:02 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:06.338 12:36:02 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:06.338 12:36:02 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:37:06.338 12:36:02 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:37:06.338 12:36:02 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:37:06.338 12:36:02 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:37:06.338 12:36:02 keyring_file -- nvmf/common.sh@733 -- # python - 00:37:06.338 12:36:02 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.sNqxv9AvRk 00:37:06.338 12:36:02 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.sNqxv9AvRk 00:37:06.338 12:36:02 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.sNqxv9AvRk 00:37:06.338 12:36:02 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.sNqxv9AvRk 00:37:06.338 12:36:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.sNqxv9AvRk 00:37:06.595 12:36:02 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:06.595 12:36:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:06.852 nvme0n1 00:37:06.852 12:36:02 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:37:06.852 12:36:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:06.852 12:36:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:06.852 12:36:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:06.852 12:36:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:06.852 12:36:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
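For reference, the NVMeTLSkey-1 interchange string that format_interchange_psk produces above is just the configured key material with a CRC32 appended, base64-encoded, behind a version prefix and a two-digit hash indicator (digest 0 selects indicator 00, i.e. no PSK hash). The python one-liner itself is never echoed in the trace, so the following is only a sketch of what nvmf/common.sh's format_key appears to compute:

format_interchange_psk() {
    local key=$1 digest=$2
    python3 - "$key" "$digest" <<'PYEOF'
import base64, sys, zlib
key = sys.argv[1].encode()                   # key is used in its ASCII form
crc = zlib.crc32(key).to_bytes(4, 'little')  # 4-byte little-endian CRC32
indicator = int(sys.argv[2])                 # 0 = no hash, per the test's digest arg
print(f'NVMeTLSkey-1:{indicator:02}:{base64.b64encode(key + crc).decode()}:')
PYEOF
}

Decoding the key0 string that appears later in this log (NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:) indeed yields the 32 ASCII characters 00112233445566778899aabbccddeeff followed by four trailing CRC bytes, which is consistent with this layout.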
00:37:07.110 12:36:02 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:37:07.110 12:36:02 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:37:07.110 12:36:02 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:07.367 12:36:03 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:37:07.367 12:36:03 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:37:07.367 12:36:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:07.367 12:36:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:07.367 12:36:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:07.367 12:36:03 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:37:07.367 12:36:03 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:37:07.367 12:36:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:07.367 12:36:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:07.367 12:36:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:07.367 12:36:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:07.367 12:36:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:07.624 12:36:03 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:37:07.624 12:36:03 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:07.624 12:36:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:07.882 12:36:03 keyring_file -- keyring/file.sh@105 -- # jq length 00:37:07.882 12:36:03 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:37:07.882 12:36:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:08.140 12:36:03 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:37:08.140 12:36:03 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.sNqxv9AvRk 00:37:08.140 12:36:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.sNqxv9AvRk 00:37:08.398 12:36:04 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.CHR5zhv1lk 00:37:08.398 12:36:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.CHR5zhv1lk 00:37:08.398 12:36:04 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:08.398 12:36:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:08.656 nvme0n1 00:37:08.656 12:36:04 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:37:08.656 12:36:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 
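The bperf_cmd/get_key/get_refcnt helpers traced over and over above reduce to thin wrappers around the keyring_get_keys RPC; keyring/common.sh is never printed whole in this log, but the xtrace lines pin its shape down to a sketch like:

bperf_cmd() {
    # talk to the bdevperf instance listening on the bperf RPC socket
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"
}

get_key() {
    # pull the one key object whose .name matches out of the full key list
    bperf_cmd keyring_get_keys | jq ".[] | select(.name == \"$1\")"
}

get_refcnt() {
    get_key "$1" | jq -r .refcnt
}

With these, the assertions in keyring/file.sh read directly as refcount checks, e.g. (( $(get_refcnt key0) == 2 )) once a controller holds a reference to the key on top of the keyring's own.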
00:37:08.914 12:36:04 keyring_file -- keyring/file.sh@113 -- # config='{ 00:37:08.914 "subsystems": [ 00:37:08.914 { 00:37:08.914 "subsystem": "keyring", 00:37:08.914 "config": [ 00:37:08.914 { 00:37:08.914 "method": "keyring_file_add_key", 00:37:08.914 "params": { 00:37:08.914 "name": "key0", 00:37:08.914 "path": "/tmp/tmp.sNqxv9AvRk" 00:37:08.914 } 00:37:08.914 }, 00:37:08.914 { 00:37:08.914 "method": "keyring_file_add_key", 00:37:08.914 "params": { 00:37:08.914 "name": "key1", 00:37:08.914 "path": "/tmp/tmp.CHR5zhv1lk" 00:37:08.914 } 00:37:08.914 } 00:37:08.914 ] 00:37:08.914 }, 00:37:08.914 { 00:37:08.914 "subsystem": "iobuf", 00:37:08.914 "config": [ 00:37:08.914 { 00:37:08.914 "method": "iobuf_set_options", 00:37:08.914 "params": { 00:37:08.914 "enable_numa": false, 00:37:08.914 "large_bufsize": 135168, 00:37:08.914 "large_pool_count": 1024, 00:37:08.914 "small_bufsize": 8192, 00:37:08.914 "small_pool_count": 8192 00:37:08.914 } 00:37:08.914 } 00:37:08.914 ] 00:37:08.914 }, 00:37:08.914 { 00:37:08.914 "subsystem": "sock", 00:37:08.914 "config": [ 00:37:08.914 { 00:37:08.914 "method": "sock_set_default_impl", 00:37:08.914 "params": { 00:37:08.914 "impl_name": "posix" 00:37:08.914 } 00:37:08.914 }, 00:37:08.914 { 00:37:08.914 "method": "sock_impl_set_options", 00:37:08.914 "params": { 00:37:08.914 "enable_ktls": false, 00:37:08.914 "enable_placement_id": 0, 00:37:08.914 "enable_quickack": false, 00:37:08.914 "enable_recv_pipe": true, 00:37:08.914 "enable_zerocopy_send_client": false, 00:37:08.914 "enable_zerocopy_send_server": true, 00:37:08.914 "impl_name": "ssl", 00:37:08.914 "recv_buf_size": 4096, 00:37:08.914 "send_buf_size": 4096, 00:37:08.914 "tls_version": 0, 00:37:08.914 "zerocopy_threshold": 0 00:37:08.914 } 00:37:08.914 }, 00:37:08.914 { 00:37:08.914 "method": "sock_impl_set_options", 00:37:08.914 "params": { 00:37:08.914 "enable_ktls": false, 00:37:08.914 "enable_placement_id": 0, 00:37:08.914 "enable_quickack": false, 00:37:08.914 "enable_recv_pipe": true, 00:37:08.914 "enable_zerocopy_send_client": false, 00:37:08.914 "enable_zerocopy_send_server": true, 00:37:08.914 "impl_name": "posix", 00:37:08.914 "recv_buf_size": 2097152, 00:37:08.914 "send_buf_size": 2097152, 00:37:08.914 "tls_version": 0, 00:37:08.914 "zerocopy_threshold": 0 00:37:08.914 } 00:37:08.914 } 00:37:08.914 ] 00:37:08.914 }, 00:37:08.914 { 00:37:08.914 "subsystem": "vmd", 00:37:08.914 "config": [] 00:37:08.914 }, 00:37:08.914 { 00:37:08.914 "subsystem": "accel", 00:37:08.914 "config": [ 00:37:08.914 { 00:37:08.914 "method": "accel_set_options", 00:37:08.914 "params": { 00:37:08.914 "buf_count": 2048, 00:37:08.914 "large_cache_size": 16, 00:37:08.914 "sequence_count": 2048, 00:37:08.914 "small_cache_size": 128, 00:37:08.914 "task_count": 2048 00:37:08.914 } 00:37:08.914 } 00:37:08.914 ] 00:37:08.914 }, 00:37:08.914 { 00:37:08.914 "subsystem": "bdev", 00:37:08.914 "config": [ 00:37:08.914 { 00:37:08.914 "method": "bdev_set_options", 00:37:08.914 "params": { 00:37:08.914 "bdev_auto_examine": true, 00:37:08.914 "bdev_io_cache_size": 256, 00:37:08.914 "bdev_io_pool_size": 65535, 00:37:08.914 "iobuf_large_cache_size": 16, 00:37:08.914 "iobuf_small_cache_size": 128 00:37:08.914 } 00:37:08.914 }, 00:37:08.914 { 00:37:08.914 "method": "bdev_raid_set_options", 00:37:08.914 "params": { 00:37:08.914 "process_max_bandwidth_mb_sec": 0, 00:37:08.914 "process_window_size_kb": 1024 00:37:08.914 } 00:37:08.914 }, 00:37:08.914 { 00:37:08.914 "method": "bdev_iscsi_set_options", 00:37:08.914 "params": { 00:37:08.914 
"timeout_sec": 30 00:37:08.914 } 00:37:08.914 }, 00:37:08.914 { 00:37:08.914 "method": "bdev_nvme_set_options", 00:37:08.914 "params": { 00:37:08.914 "action_on_timeout": "none", 00:37:08.914 "allow_accel_sequence": false, 00:37:08.914 "arbitration_burst": 0, 00:37:08.914 "bdev_retry_count": 3, 00:37:08.914 "ctrlr_loss_timeout_sec": 0, 00:37:08.914 "delay_cmd_submit": true, 00:37:08.914 "dhchap_dhgroups": [ 00:37:08.914 "null", 00:37:08.914 "ffdhe2048", 00:37:08.914 "ffdhe3072", 00:37:08.914 "ffdhe4096", 00:37:08.914 "ffdhe6144", 00:37:08.914 "ffdhe8192" 00:37:08.914 ], 00:37:08.914 "dhchap_digests": [ 00:37:08.914 "sha256", 00:37:08.914 "sha384", 00:37:08.914 "sha512" 00:37:08.914 ], 00:37:08.914 "disable_auto_failback": false, 00:37:08.914 "fast_io_fail_timeout_sec": 0, 00:37:08.914 "generate_uuids": false, 00:37:08.914 "high_priority_weight": 0, 00:37:08.914 "io_path_stat": false, 00:37:08.914 "io_queue_requests": 512, 00:37:08.914 "keep_alive_timeout_ms": 10000, 00:37:08.914 "low_priority_weight": 0, 00:37:08.914 "medium_priority_weight": 0, 00:37:08.914 "nvme_adminq_poll_period_us": 10000, 00:37:08.914 "nvme_error_stat": false, 00:37:08.914 "nvme_ioq_poll_period_us": 0, 00:37:08.914 "rdma_cm_event_timeout_ms": 0, 00:37:08.914 "rdma_max_cq_size": 0, 00:37:08.914 "rdma_srq_size": 0, 00:37:08.914 "reconnect_delay_sec": 0, 00:37:08.914 "timeout_admin_us": 0, 00:37:08.914 "timeout_us": 0, 00:37:08.914 "transport_ack_timeout": 0, 00:37:08.914 "transport_retry_count": 4, 00:37:08.914 "transport_tos": 0 00:37:08.914 } 00:37:08.914 }, 00:37:08.914 { 00:37:08.914 "method": "bdev_nvme_attach_controller", 00:37:08.914 "params": { 00:37:08.914 "adrfam": "IPv4", 00:37:08.914 "ctrlr_loss_timeout_sec": 0, 00:37:08.914 "ddgst": false, 00:37:08.914 "fast_io_fail_timeout_sec": 0, 00:37:08.914 "hdgst": false, 00:37:08.914 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:08.914 "multipath": "multipath", 00:37:08.914 "name": "nvme0", 00:37:08.914 "prchk_guard": false, 00:37:08.914 "prchk_reftag": false, 00:37:08.914 "psk": "key0", 00:37:08.914 "reconnect_delay_sec": 0, 00:37:08.914 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:08.914 "traddr": "127.0.0.1", 00:37:08.914 "trsvcid": "4420", 00:37:08.914 "trtype": "TCP" 00:37:08.914 } 00:37:08.914 }, 00:37:08.914 { 00:37:08.914 "method": "bdev_nvme_set_hotplug", 00:37:08.914 "params": { 00:37:08.914 "enable": false, 00:37:08.914 "period_us": 100000 00:37:08.914 } 00:37:08.914 }, 00:37:08.914 { 00:37:08.914 "method": "bdev_wait_for_examine" 00:37:08.914 } 00:37:08.914 ] 00:37:08.914 }, 00:37:08.914 { 00:37:08.914 "subsystem": "nbd", 00:37:08.914 "config": [] 00:37:08.914 } 00:37:08.914 ] 00:37:08.914 }' 00:37:08.914 12:36:04 keyring_file -- keyring/file.sh@115 -- # killprocess 109985 00:37:08.914 12:36:04 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 109985 ']' 00:37:08.914 12:36:04 keyring_file -- common/autotest_common.sh@958 -- # kill -0 109985 00:37:08.914 12:36:04 keyring_file -- common/autotest_common.sh@959 -- # uname 00:37:08.914 12:36:04 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:08.914 12:36:04 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109985 00:37:08.914 killing process with pid 109985 00:37:08.914 Received shutdown signal, test time was about 1.000000 seconds 00:37:08.915 00:37:08.915 Latency(us) 00:37:08.915 [2024-11-25T12:36:04.791Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:08.915 [2024-11-25T12:36:04.791Z] 
=================================================================================================================== 00:37:08.915 [2024-11-25T12:36:04.791Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:08.915 12:36:04 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:08.915 12:36:04 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:08.915 12:36:04 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109985' 00:37:08.915 12:36:04 keyring_file -- common/autotest_common.sh@973 -- # kill 109985 00:37:08.915 12:36:04 keyring_file -- common/autotest_common.sh@978 -- # wait 109985 00:37:09.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:09.173 12:36:04 keyring_file -- keyring/file.sh@118 -- # bperfpid=110436 00:37:09.173 12:36:04 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:37:09.173 12:36:04 keyring_file -- keyring/file.sh@120 -- # waitforlisten 110436 /var/tmp/bperf.sock 00:37:09.173 12:36:04 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 110436 ']' 00:37:09.173 12:36:04 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:09.173 12:36:04 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:09.173 12:36:04 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:09.173 12:36:04 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:09.173 12:36:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:09.173 12:36:04 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:37:09.173 "subsystems": [ 00:37:09.173 { 00:37:09.173 "subsystem": "keyring", 00:37:09.173 "config": [ 00:37:09.173 { 00:37:09.173 "method": "keyring_file_add_key", 00:37:09.173 "params": { 00:37:09.173 "name": "key0", 00:37:09.173 "path": "/tmp/tmp.sNqxv9AvRk" 00:37:09.173 } 00:37:09.173 }, 00:37:09.173 { 00:37:09.173 "method": "keyring_file_add_key", 00:37:09.173 "params": { 00:37:09.173 "name": "key1", 00:37:09.173 "path": "/tmp/tmp.CHR5zhv1lk" 00:37:09.173 } 00:37:09.173 } 00:37:09.173 ] 00:37:09.173 }, 00:37:09.173 { 00:37:09.173 "subsystem": "iobuf", 00:37:09.173 "config": [ 00:37:09.173 { 00:37:09.173 "method": "iobuf_set_options", 00:37:09.173 "params": { 00:37:09.173 "enable_numa": false, 00:37:09.173 "large_bufsize": 135168, 00:37:09.173 "large_pool_count": 1024, 00:37:09.173 "small_bufsize": 8192, 00:37:09.173 "small_pool_count": 8192 00:37:09.173 } 00:37:09.173 } 00:37:09.173 ] 00:37:09.173 }, 00:37:09.173 { 00:37:09.173 "subsystem": "sock", 00:37:09.173 "config": [ 00:37:09.173 { 00:37:09.173 "method": "sock_set_default_impl", 00:37:09.173 "params": { 00:37:09.173 "impl_name": "posix" 00:37:09.173 } 00:37:09.173 }, 00:37:09.173 { 00:37:09.173 "method": "sock_impl_set_options", 00:37:09.173 "params": { 00:37:09.173 "enable_ktls": false, 00:37:09.173 "enable_placement_id": 0, 00:37:09.173 "enable_quickack": false, 00:37:09.173 "enable_recv_pipe": true, 00:37:09.173 "enable_zerocopy_send_client": false, 00:37:09.173 "enable_zerocopy_send_server": true, 00:37:09.173 "impl_name": "ssl", 00:37:09.173 "recv_buf_size": 4096, 00:37:09.173 "send_buf_size": 4096, 00:37:09.173 "tls_version": 0, 00:37:09.173 "zerocopy_threshold": 0 00:37:09.173 } 00:37:09.173 }, 
00:37:09.173 { 00:37:09.173 "method": "sock_impl_set_options", 00:37:09.173 "params": { 00:37:09.173 "enable_ktls": false, 00:37:09.173 "enable_placement_id": 0, 00:37:09.173 "enable_quickack": false, 00:37:09.173 "enable_recv_pipe": true, 00:37:09.173 "enable_zerocopy_send_client": false, 00:37:09.173 "enable_zerocopy_send_server": true, 00:37:09.173 "impl_name": "posix", 00:37:09.173 "recv_buf_size": 2097152, 00:37:09.173 "send_buf_size": 2097152, 00:37:09.173 "tls_version": 0, 00:37:09.173 "zerocopy_threshold": 0 00:37:09.173 } 00:37:09.173 } 00:37:09.173 ] 00:37:09.173 }, 00:37:09.173 { 00:37:09.173 "subsystem": "vmd", 00:37:09.173 "config": [] 00:37:09.173 }, 00:37:09.173 { 00:37:09.173 "subsystem": "accel", 00:37:09.174 "config": [ 00:37:09.174 { 00:37:09.174 "method": "accel_set_options", 00:37:09.174 "params": { 00:37:09.174 "buf_count": 2048, 00:37:09.174 "large_cache_size": 16, 00:37:09.174 "sequence_count": 2048, 00:37:09.174 "small_cache_size": 128, 00:37:09.174 "task_count": 2048 00:37:09.174 } 00:37:09.174 } 00:37:09.174 ] 00:37:09.174 }, 00:37:09.174 { 00:37:09.174 "subsystem": "bdev", 00:37:09.174 "config": [ 00:37:09.174 { 00:37:09.174 "method": "bdev_set_options", 00:37:09.174 "params": { 00:37:09.174 "bdev_auto_examine": true, 00:37:09.174 "bdev_io_cache_size": 256, 00:37:09.174 "bdev_io_pool_size": 65535, 00:37:09.174 "iobuf_large_cache_size": 16, 00:37:09.174 "iobuf_small_cache_size": 128 00:37:09.174 } 00:37:09.174 }, 00:37:09.174 { 00:37:09.174 "method": "bdev_raid_set_options", 00:37:09.174 "params": { 00:37:09.174 "process_max_bandwidth_mb_sec": 0, 00:37:09.174 "process_window_size_kb": 1024 00:37:09.174 } 00:37:09.174 }, 00:37:09.174 { 00:37:09.174 "method": "bdev_iscsi_set_options", 00:37:09.174 "params": { 00:37:09.174 "timeout_sec": 30 00:37:09.174 } 00:37:09.174 }, 00:37:09.174 { 00:37:09.174 "method": "bdev_nvme_set_options", 00:37:09.174 "params": { 00:37:09.174 "action_on_timeout": "none", 00:37:09.174 "allow_accel_sequence": false, 00:37:09.174 "arbitration_burst": 0, 00:37:09.174 "bdev_retry_count": 3, 00:37:09.174 "ctrlr_loss_timeout_sec": 0, 00:37:09.174 "delay_cmd_submit": true, 00:37:09.174 "dhchap_dhgroups": [ 00:37:09.174 "null", 00:37:09.174 "ffdhe2048", 00:37:09.174 "ffdhe3072", 00:37:09.174 "ffdhe4096", 00:37:09.174 "ffdhe6144", 00:37:09.174 "ffdhe8192" 00:37:09.174 ], 00:37:09.174 "dhchap_digests": [ 00:37:09.174 "sha256", 00:37:09.174 "sha384", 00:37:09.174 "sha512" 00:37:09.174 ], 00:37:09.174 "disable_auto_failback": false, 00:37:09.174 "fast_io_fail_timeout_sec": 0, 00:37:09.174 "generate_uuids": false, 00:37:09.174 "high_priority_weight": 0, 00:37:09.174 "io_path_stat": false, 00:37:09.174 "io_queue_requests": 512, 00:37:09.174 "keep_alive_timeout_ms": 10000, 00:37:09.174 "low_priority_weight": 0, 00:37:09.174 "medium_priority_weight": 0, 00:37:09.174 "nvme_adminq_poll_period_us": 10000, 00:37:09.174 "nvme_error_stat": false, 00:37:09.174 "nvme_ioq_poll_period_us": 0, 00:37:09.174 "rdma_cm_event_timeout_ms": 0, 00:37:09.174 "rdma_max_cq_size": 0, 00:37:09.174 "rdma_srq_size": 0, 00:37:09.174 "reconnect_delay_sec": 0, 00:37:09.174 "timeout_admin_us": 0, 00:37:09.174 "timeout_us": 0, 00:37:09.174 "transport_ack_timeout": 0, 00:37:09.174 "transport_retry_count": 4, 00:37:09.174 "transport_tos": 0 00:37:09.174 } 00:37:09.174 }, 00:37:09.174 { 00:37:09.174 "method": "bdev_nvme_attach_controller", 00:37:09.174 "params": { 00:37:09.174 "adrfam": "IPv4", 00:37:09.174 "ctrlr_loss_timeout_sec": 0, 00:37:09.174 "ddgst": false, 00:37:09.174 
"fast_io_fail_timeout_sec": 0, 00:37:09.174 "hdgst": false, 00:37:09.174 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:09.174 "multipath": "multipath", 00:37:09.174 "name": "nvme0", 00:37:09.174 "prchk_guard": false, 00:37:09.174 "prchk_reftag": false, 00:37:09.174 "psk": "key0", 00:37:09.174 "reconnect_delay_sec": 0, 00:37:09.174 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:09.174 "traddr": "127.0.0.1", 00:37:09.174 "trsvcid": "4420", 00:37:09.174 "trtype": "TCP" 00:37:09.174 } 00:37:09.174 }, 00:37:09.174 { 00:37:09.174 "method": "bdev_nvme_set_hotplug", 00:37:09.174 "params": { 00:37:09.174 "enable": false, 00:37:09.174 "period_us": 100000 00:37:09.174 } 00:37:09.174 }, 00:37:09.174 { 00:37:09.174 "method": "bdev_wait_for_examine" 00:37:09.174 } 00:37:09.174 ] 00:37:09.174 }, 00:37:09.174 { 00:37:09.174 "subsystem": "nbd", 00:37:09.174 "config": [] 00:37:09.174 } 00:37:09.174 ] 00:37:09.174 }' 00:37:09.174 [2024-11-25 12:36:04.910854] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 00:37:09.174 [2024-11-25 12:36:04.911047] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110436 ] 00:37:09.174 [2024-11-25 12:36:05.046206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:09.432 [2024-11-25 12:36:05.083938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:09.432 [2024-11-25 12:36:05.242738] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:09.997 12:36:05 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:09.997 12:36:05 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:37:09.997 12:36:05 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:37:09.997 12:36:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:09.997 12:36:05 keyring_file -- keyring/file.sh@121 -- # jq length 00:37:10.255 12:36:06 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:37:10.255 12:36:06 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:37:10.255 12:36:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:10.255 12:36:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:10.255 12:36:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:10.255 12:36:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:10.255 12:36:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:10.512 12:36:06 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:37:10.512 12:36:06 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:37:10.512 12:36:06 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:10.512 12:36:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:10.512 12:36:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:10.512 12:36:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:10.512 12:36:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:10.770 12:36:06 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 
00:37:10.770 12:36:06 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:37:10.770 12:36:06 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:37:10.770 12:36:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:37:10.770 12:36:06 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:37:10.770 12:36:06 keyring_file -- keyring/file.sh@1 -- # cleanup 00:37:10.770 12:36:06 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.sNqxv9AvRk /tmp/tmp.CHR5zhv1lk 00:37:10.770 12:36:06 keyring_file -- keyring/file.sh@20 -- # killprocess 110436 00:37:10.770 12:36:06 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 110436 ']' 00:37:10.770 12:36:06 keyring_file -- common/autotest_common.sh@958 -- # kill -0 110436 00:37:11.030 12:36:06 keyring_file -- common/autotest_common.sh@959 -- # uname 00:37:11.030 12:36:06 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:11.030 12:36:06 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110436 00:37:11.030 killing process with pid 110436 00:37:11.030 Received shutdown signal, test time was about 1.000000 seconds 00:37:11.030 00:37:11.030 Latency(us) 00:37:11.030 [2024-11-25T12:36:06.906Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:11.030 [2024-11-25T12:36:06.906Z] =================================================================================================================== 00:37:11.030 [2024-11-25T12:36:06.906Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:11.030 12:36:06 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:11.030 12:36:06 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:11.030 12:36:06 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110436' 00:37:11.030 12:36:06 keyring_file -- common/autotest_common.sh@973 -- # kill 110436 00:37:11.030 12:36:06 keyring_file -- common/autotest_common.sh@978 -- # wait 110436 00:37:11.030 12:36:06 keyring_file -- keyring/file.sh@21 -- # killprocess 109954 00:37:11.030 12:36:06 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 109954 ']' 00:37:11.030 12:36:06 keyring_file -- common/autotest_common.sh@958 -- # kill -0 109954 00:37:11.030 12:36:06 keyring_file -- common/autotest_common.sh@959 -- # uname 00:37:11.030 12:36:06 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:11.030 12:36:06 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109954 00:37:11.030 killing process with pid 109954 00:37:11.030 12:36:06 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:11.030 12:36:06 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:11.030 12:36:06 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109954' 00:37:11.030 12:36:06 keyring_file -- common/autotest_common.sh@973 -- # kill 109954 00:37:11.030 12:36:06 keyring_file -- common/autotest_common.sh@978 -- # wait 109954 00:37:11.290 00:37:11.290 real 0m12.813s 00:37:11.290 user 0m31.153s 00:37:11.290 sys 0m2.458s 00:37:11.290 12:36:06 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:11.290 12:36:06 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:11.290 ************************************ 00:37:11.290 END TEST keyring_file 
00:37:11.290 ************************************ 00:37:11.290 12:36:07 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:37:11.290 12:36:07 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:37:11.290 12:36:07 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:11.290 12:36:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:11.290 12:36:07 -- common/autotest_common.sh@10 -- # set +x 00:37:11.290 ************************************ 00:37:11.290 START TEST keyring_linux 00:37:11.290 ************************************ 00:37:11.290 12:36:07 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:37:11.290 Joined session keyring: 546462257 00:37:11.290 * Looking for test storage... 00:37:11.290 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:37:11.290 12:36:07 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:11.290 12:36:07 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:11.290 12:36:07 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:37:11.551 12:36:07 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:11.551 12:36:07 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:11.551 12:36:07 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:11.551 12:36:07 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:11.551 12:36:07 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:37:11.551 12:36:07 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:37:11.551 12:36:07 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:37:11.551 12:36:07 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:37:11.551 12:36:07 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:37:11.551 12:36:07 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:37:11.551 12:36:07 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:37:11.551 12:36:07 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:11.551 12:36:07 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:37:11.551 12:36:07 keyring_linux -- scripts/common.sh@345 -- # : 1 00:37:11.551 12:36:07 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:11.551 12:36:07 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:11.551 12:36:07 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:37:11.551 12:36:07 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:37:11.551 12:36:07 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:11.551 12:36:07 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:37:11.551 12:36:07 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:37:11.551 12:36:07 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:37:11.551 12:36:07 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:37:11.551 12:36:07 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:11.551 12:36:07 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:37:11.551 12:36:07 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:37:11.551 12:36:07 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:11.551 12:36:07 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:11.551 12:36:07 keyring_linux -- scripts/common.sh@368 -- # return 0 00:37:11.551 12:36:07 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:11.551 12:36:07 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:11.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:11.552 --rc genhtml_branch_coverage=1 00:37:11.552 --rc genhtml_function_coverage=1 00:37:11.552 --rc genhtml_legend=1 00:37:11.552 --rc geninfo_all_blocks=1 00:37:11.552 --rc geninfo_unexecuted_blocks=1 00:37:11.552 00:37:11.552 ' 00:37:11.552 12:36:07 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:11.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:11.552 --rc genhtml_branch_coverage=1 00:37:11.552 --rc genhtml_function_coverage=1 00:37:11.552 --rc genhtml_legend=1 00:37:11.552 --rc geninfo_all_blocks=1 00:37:11.552 --rc geninfo_unexecuted_blocks=1 00:37:11.552 00:37:11.552 ' 00:37:11.552 12:36:07 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:11.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:11.552 --rc genhtml_branch_coverage=1 00:37:11.552 --rc genhtml_function_coverage=1 00:37:11.552 --rc genhtml_legend=1 00:37:11.552 --rc geninfo_all_blocks=1 00:37:11.552 --rc geninfo_unexecuted_blocks=1 00:37:11.552 00:37:11.552 ' 00:37:11.552 12:36:07 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:11.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:11.552 --rc genhtml_branch_coverage=1 00:37:11.552 --rc genhtml_function_coverage=1 00:37:11.552 --rc genhtml_legend=1 00:37:11.552 --rc geninfo_all_blocks=1 00:37:11.552 --rc geninfo_unexecuted_blocks=1 00:37:11.552 00:37:11.552 ' 00:37:11.552 12:36:07 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:37:11.552 12:36:07 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:37:11.552 12:36:07 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:37:11.552 12:36:07 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:11.552 12:36:07 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:11.552 12:36:07 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:11.552 12:36:07 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:11.552 12:36:07 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:11.552 12:36:07 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:11.552 12:36:07 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:11.552 12:36:07 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:11.552 12:36:07 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:11.552 12:36:07 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:11.552 12:36:07 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5934b511-67a7-4c73-84f0-52fdf36bb79d 00:37:11.552 12:36:07 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5934b511-67a7-4c73-84f0-52fdf36bb79d 00:37:11.552 12:36:07 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:11.552 12:36:07 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:11.552 12:36:07 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:37:11.552 12:36:07 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:11.552 12:36:07 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:37:11.552 12:36:07 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:37:11.552 12:36:07 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:11.552 12:36:07 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:11.552 12:36:07 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:11.552 12:36:07 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:11.552 12:36:07 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:11.552 12:36:07 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:11.552 12:36:07 keyring_linux -- paths/export.sh@5 -- # export PATH 00:37:11.552 12:36:07 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:11.552 12:36:07 keyring_linux -- nvmf/common.sh@51 -- # : 0 
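One detail worth noting before the key material is prepared: the "Joined session keyring: 546462257" line above comes from scripts/keyctl-session-wrapper, which runs linux.sh inside a fresh session keyring so that everything the test keyctl-adds is torn down with the session. The wrapper itself is not shown in this log; it is likely little more than:

# keyctl-session-wrapper (sketch): join a new anonymous session keyring,
# run the wrapped command inside it, and let the keyring die with it.
# keyctl prints the "Joined session keyring: <id>" line seen above.
exec keyctl session - "$@"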
00:37:11.552 12:36:07 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:11.552 12:36:07 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:11.552 12:36:07 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:11.552 12:36:07 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:11.552 12:36:07 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:11.552 12:36:07 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:11.552 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:11.552 12:36:07 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:11.552 12:36:07 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:11.552 12:36:07 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:11.552 12:36:07 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:11.552 12:36:07 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:11.552 12:36:07 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:11.552 12:36:07 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:37:11.552 12:36:07 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:37:11.552 12:36:07 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:37:11.552 12:36:07 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:37:11.552 12:36:07 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:11.552 12:36:07 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:37:11.552 12:36:07 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:11.552 12:36:07 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:11.552 12:36:07 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:37:11.552 12:36:07 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:11.552 12:36:07 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:11.552 12:36:07 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:37:11.552 12:36:07 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:37:11.552 12:36:07 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:37:11.552 12:36:07 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:37:11.552 12:36:07 keyring_linux -- nvmf/common.sh@733 -- # python - 00:37:11.552 12:36:07 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:37:11.552 /tmp/:spdk-test:key0 00:37:11.552 12:36:07 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:37:11.552 12:36:07 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:37:11.552 12:36:07 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:11.552 12:36:07 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:37:11.552 12:36:07 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:11.552 12:36:07 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:11.552 12:36:07 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:37:11.552 12:36:07 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:37:11.552 12:36:07 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:11.552 12:36:07 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:37:11.552 12:36:07 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:37:11.552 12:36:07 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:37:11.552 12:36:07 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:37:11.552 12:36:07 keyring_linux -- nvmf/common.sh@733 -- # python - 00:37:11.552 12:36:07 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:37:11.552 /tmp/:spdk-test:key1 00:37:11.552 12:36:07 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:37:11.552 12:36:07 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=110585 00:37:11.552 12:36:07 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:11.552 12:36:07 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 110585 00:37:11.552 12:36:07 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 110585 ']' 00:37:11.552 12:36:07 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:11.552 12:36:07 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:11.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:11.552 12:36:07 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:11.552 12:36:07 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:11.552 12:36:07 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:11.552 [2024-11-25 12:36:07.359398] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:37:11.552 [2024-11-25 12:36:07.359471] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110585 ] 00:37:11.811 [2024-11-25 12:36:07.494805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:11.811 [2024-11-25 12:36:07.528838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:12.376 12:36:08 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:12.376 12:36:08 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:37:12.376 12:36:08 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:37:12.376 12:36:08 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:12.376 12:36:08 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:12.376 [2024-11-25 12:36:08.240409] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:12.634 null0 00:37:12.634 [2024-11-25 12:36:08.272384] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:12.635 [2024-11-25 12:36:08.272493] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:12.635 12:36:08 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:12.635 12:36:08 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:37:12.635 1045958945 00:37:12.635 12:36:08 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:37:12.635 259012316 00:37:12.635 12:36:08 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=110619 00:37:12.635 12:36:08 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 110619 /var/tmp/bperf.sock 00:37:12.635 12:36:08 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 110619 ']' 00:37:12.635 12:36:08 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:37:12.635 12:36:08 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:12.635 12:36:08 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:12.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:12.635 12:36:08 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:12.635 12:36:08 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:12.635 12:36:08 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:12.635 [2024-11-25 12:36:08.336440] Starting SPDK v25.01-pre git sha1 f1dd81af3 / DPDK 24.03.0 initialization... 
00:37:12.635 [2024-11-25 12:36:08.336498] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110619 ] 00:37:12.635 [2024-11-25 12:36:08.473143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:12.892 [2024-11-25 12:36:08.510079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:13.458 12:36:09 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:13.458 12:36:09 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:37:13.458 12:36:09 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:37:13.458 12:36:09 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:37:13.715 12:36:09 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:37:13.715 12:36:09 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:13.973 12:36:09 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:13.973 12:36:09 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:14.230 [2024-11-25 12:36:09.863557] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:14.230 nvme0n1 00:37:14.230 12:36:09 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:37:14.230 12:36:09 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:37:14.230 12:36:09 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:14.230 12:36:09 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:14.230 12:36:09 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:14.230 12:36:09 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:14.488 12:36:10 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:37:14.488 12:36:10 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:14.488 12:36:10 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:37:14.488 12:36:10 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:37:14.488 12:36:10 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:14.488 12:36:10 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:37:14.488 12:36:10 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:14.488 12:36:10 keyring_linux -- keyring/linux.sh@25 -- # sn=1045958945 00:37:14.488 12:36:10 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:37:14.488 12:36:10 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:14.488 12:36:10 keyring_linux -- keyring/linux.sh@26 -- # [[ 1045958945 == \1\0\4\5\9\5\8\9\4\5 ]] 00:37:14.769 12:36:10 
keyring_linux -- keyring/linux.sh@27 -- # keyctl print 1045958945 00:37:14.769 12:36:10 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:37:14.769 12:36:10 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:14.769 Running I/O for 1 seconds... 00:37:15.703 21047.00 IOPS, 82.21 MiB/s 00:37:15.703 Latency(us) 00:37:15.703 [2024-11-25T12:36:11.579Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:15.703 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:15.703 nvme0n1 : 1.01 21051.59 82.23 0.00 0.00 6060.47 3629.69 8620.50 00:37:15.703 [2024-11-25T12:36:11.579Z] =================================================================================================================== 00:37:15.703 [2024-11-25T12:36:11.579Z] Total : 21051.59 82.23 0.00 0.00 6060.47 3629.69 8620.50 00:37:15.703 { 00:37:15.703 "results": [ 00:37:15.703 { 00:37:15.703 "job": "nvme0n1", 00:37:15.703 "core_mask": "0x2", 00:37:15.703 "workload": "randread", 00:37:15.703 "status": "finished", 00:37:15.703 "queue_depth": 128, 00:37:15.703 "io_size": 4096, 00:37:15.703 "runtime": 1.00591, 00:37:15.703 "iops": 21051.585131870645, 00:37:15.703 "mibps": 82.2327544213697, 00:37:15.703 "io_failed": 0, 00:37:15.703 "io_timeout": 0, 00:37:15.703 "avg_latency_us": 6060.474857458371, 00:37:15.703 "min_latency_us": 3629.686153846154, 00:37:15.703 "max_latency_us": 8620.504615384616 00:37:15.703 } 00:37:15.703 ], 00:37:15.703 "core_count": 1 00:37:15.703 } 00:37:15.703 12:36:11 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:15.703 12:36:11 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:15.960 12:36:11 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:37:15.960 12:36:11 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:37:15.960 12:36:11 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:15.960 12:36:11 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:15.960 12:36:11 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:15.960 12:36:11 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:16.217 12:36:11 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:37:16.217 12:36:11 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:16.217 12:36:11 keyring_linux -- keyring/linux.sh@23 -- # return 00:37:16.217 12:36:11 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:16.217 12:36:11 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:37:16.217 12:36:11 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:16.217 12:36:11 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:37:16.217 12:36:11 keyring_linux -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:16.217 12:36:11 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:37:16.217 12:36:11 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:16.217 12:36:11 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:16.217 12:36:11 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:16.477 [2024-11-25 12:36:12.143796] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:16.477 [2024-11-25 12:36:12.144476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b0e160 (107): Transport endpoint is not connected 00:37:16.477 [2024-11-25 12:36:12.145470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b0e160 (9): Bad file descriptor 00:37:16.477 [2024-11-25 12:36:12.146468] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:37:16.477 [2024-11-25 12:36:12.146551] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:16.477 [2024-11-25 12:36:12.146587] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:37:16.477 [2024-11-25 12:36:12.146619] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
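The attach attempt above uses :spdk-test:key1 and is wrapped in the test suite's NOT helper, so the JSON-RPC failure reported just below is the expected outcome of this negative test, not a regression. A minimal sketch of that expected-failure pattern, reusing the rpc.py invocation and bperf socket visible in this log (the standalone if/exit wrapper and its error message are illustrative, not the suite's actual NOT implementation):

  # negative test: the attach must NOT succeed when given :spdk-test:key1
  if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
      --psk :spdk-test:key1; then
    echo "attach with :spdk-test:key1 unexpectedly succeeded" >&2
    exit 1
  fi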
00:37:16.477 2024/11/25 12:36:12 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 allow_unrecognized_csi:%!s(bool=false) ddgst:%!s(bool=false) hdgst:%!s(bool=false) hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 prchk_guard:%!s(bool=false) prchk_reftag:%!s(bool=false) psk::spdk-test:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-5 Msg=Input/output error 00:37:16.477 request: 00:37:16.477 { 00:37:16.477 "method": "bdev_nvme_attach_controller", 00:37:16.477 "params": { 00:37:16.477 "name": "nvme0", 00:37:16.477 "trtype": "tcp", 00:37:16.477 "traddr": "127.0.0.1", 00:37:16.477 "adrfam": "ipv4", 00:37:16.477 "trsvcid": "4420", 00:37:16.477 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:16.477 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:16.477 "prchk_reftag": false, 00:37:16.477 "prchk_guard": false, 00:37:16.478 "hdgst": false, 00:37:16.478 "ddgst": false, 00:37:16.478 "psk": ":spdk-test:key1", 00:37:16.478 "allow_unrecognized_csi": false 00:37:16.478 } 00:37:16.478 } 00:37:16.478 Got JSON-RPC error response 00:37:16.478 GoRPCClient: error on JSON-RPC call 00:37:16.478 12:36:12 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:37:16.478 12:36:12 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:16.478 12:36:12 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:16.478 12:36:12 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:16.478 12:36:12 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:37:16.478 12:36:12 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:16.478 12:36:12 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:37:16.478 12:36:12 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:37:16.478 12:36:12 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:37:16.478 12:36:12 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:16.478 12:36:12 keyring_linux -- keyring/linux.sh@33 -- # sn=1045958945 00:37:16.478 12:36:12 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1045958945 00:37:16.478 1 links removed 00:37:16.478 12:36:12 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:16.478 12:36:12 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:37:16.478 12:36:12 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:37:16.478 12:36:12 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:37:16.478 12:36:12 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:37:16.478 12:36:12 keyring_linux -- keyring/linux.sh@33 -- # sn=259012316 00:37:16.478 12:36:12 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 259012316 00:37:16.478 1 links removed 00:37:16.478 12:36:12 keyring_linux -- keyring/linux.sh@41 -- # killprocess 110619 00:37:16.478 12:36:12 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 110619 ']' 00:37:16.478 12:36:12 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 110619 00:37:16.478 12:36:12 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:37:16.478 12:36:12 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:16.478 12:36:12 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110619 00:37:16.478 12:36:12 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:16.478 
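The cleanup above resolves each test key's serial number from the session keyring and unlinks it before the bperf process (pid 110619) is killed in the entries that follow. A minimal sketch of the keyctl lifecycle exercised here; the key description and the serial numbers (1045958945 for key0, 259012316 for key1) are taken from this run:

  # look up the serial of the named key in the session keyring (@s)
  sn=$(keyctl search @s user :spdk-test:key0)
  keyctl print "$sn"    # dump the stored NVMe TLS PSK payload
  keyctl unlink "$sn"   # remove the key; keyctl reports "1 links removed"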
12:36:12 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:16.478 killing process with pid 110619 00:37:16.478 12:36:12 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110619' 00:37:16.478 12:36:12 keyring_linux -- common/autotest_common.sh@973 -- # kill 110619 00:37:16.478 Received shutdown signal, test time was about 1.000000 seconds 00:37:16.478 00:37:16.478 Latency(us) 00:37:16.478 [2024-11-25T12:36:12.354Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:16.478 [2024-11-25T12:36:12.354Z] =================================================================================================================== 00:37:16.478 [2024-11-25T12:36:12.354Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:16.478 12:36:12 keyring_linux -- common/autotest_common.sh@978 -- # wait 110619 00:37:16.478 12:36:12 keyring_linux -- keyring/linux.sh@42 -- # killprocess 110585 00:37:16.478 12:36:12 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 110585 ']' 00:37:16.478 12:36:12 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 110585 00:37:16.478 12:36:12 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:37:16.478 12:36:12 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:16.478 12:36:12 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110585 00:37:16.478 12:36:12 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:16.478 12:36:12 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:16.478 killing process with pid 110585 00:37:16.478 12:36:12 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110585' 00:37:16.478 12:36:12 keyring_linux -- common/autotest_common.sh@973 -- # kill 110585 00:37:16.478 12:36:12 keyring_linux -- common/autotest_common.sh@978 -- # wait 110585 00:37:16.739 00:37:16.739 real 0m5.465s 00:37:16.739 user 0m10.349s 00:37:16.739 sys 0m1.356s 00:37:16.739 ************************************ 00:37:16.739 END TEST keyring_linux 00:37:16.739 12:36:12 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:16.739 12:36:12 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:16.739 ************************************ 00:37:16.739 12:36:12 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:37:16.739 12:36:12 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:37:16.739 12:36:12 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:37:16.739 12:36:12 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:37:16.739 12:36:12 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:37:16.739 12:36:12 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:37:16.739 12:36:12 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:37:16.739 12:36:12 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:37:16.739 12:36:12 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:37:16.739 12:36:12 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:37:16.739 12:36:12 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:37:16.739 12:36:12 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:37:16.739 12:36:12 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:37:16.739 12:36:12 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:37:16.739 12:36:12 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:37:16.739 12:36:12 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:37:16.739 12:36:12 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:37:16.739 12:36:12 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:37:16.739 12:36:12 -- common/autotest_common.sh@10 -- # set +x 00:37:16.739 12:36:12 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:37:16.739 12:36:12 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:37:16.739 12:36:12 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:37:16.739 12:36:12 -- common/autotest_common.sh@10 -- # set +x 00:37:18.645 INFO: APP EXITING 00:37:18.645 INFO: killing all VMs 00:37:18.645 INFO: killing vhost app 00:37:18.645 INFO: EXIT DONE 00:37:18.904 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:37:18.904 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:37:18.904 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:37:19.472 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:37:19.732 Cleaning 00:37:19.732 Removing: /var/run/dpdk/spdk0/config 00:37:19.732 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:37:19.732 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:37:19.732 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:37:19.732 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:37:19.732 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:37:19.732 Removing: /var/run/dpdk/spdk0/hugepage_info 00:37:19.732 Removing: /var/run/dpdk/spdk1/config 00:37:19.732 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:37:19.732 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:37:19.732 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:37:19.732 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:37:19.732 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:37:19.732 Removing: /var/run/dpdk/spdk1/hugepage_info 00:37:19.732 Removing: /var/run/dpdk/spdk2/config 00:37:19.732 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:37:19.732 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:37:19.732 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:37:19.732 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:37:19.732 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:37:19.732 Removing: /var/run/dpdk/spdk2/hugepage_info 00:37:19.732 Removing: /var/run/dpdk/spdk3/config 00:37:19.732 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:37:19.732 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:37:19.732 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:37:19.732 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:37:19.732 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:37:19.732 Removing: /var/run/dpdk/spdk3/hugepage_info 00:37:19.732 Removing: /var/run/dpdk/spdk4/config 00:37:19.732 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:37:19.732 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:37:19.732 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:37:19.732 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:37:19.732 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:37:19.732 Removing: /var/run/dpdk/spdk4/hugepage_info 00:37:19.732 Removing: /dev/shm/nvmf_trace.0 00:37:19.732 Removing: /dev/shm/spdk_tgt_trace.pid58032 00:37:19.732 Removing: /var/run/dpdk/spdk0 00:37:19.732 Removing: /var/run/dpdk/spdk1 00:37:19.732 Removing: /var/run/dpdk/spdk2 00:37:19.732 Removing: /var/run/dpdk/spdk3 00:37:19.732 Removing: /var/run/dpdk/spdk4 00:37:19.732 Removing: /var/run/dpdk/spdk_pid100309 00:37:19.732 Removing: 
/var/run/dpdk/spdk_pid100349 00:37:19.732 Removing: /var/run/dpdk/spdk_pid100716 00:37:19.732 Removing: /var/run/dpdk/spdk_pid100761 00:37:19.732 Removing: /var/run/dpdk/spdk_pid101160 00:37:19.732 Removing: /var/run/dpdk/spdk_pid101720 00:37:19.732 Removing: /var/run/dpdk/spdk_pid102143 00:37:19.732 Removing: /var/run/dpdk/spdk_pid103189 00:37:19.732 Removing: /var/run/dpdk/spdk_pid104250 00:37:19.732 Removing: /var/run/dpdk/spdk_pid104362 00:37:19.732 Removing: /var/run/dpdk/spdk_pid104430 00:37:19.732 Removing: /var/run/dpdk/spdk_pid106025 00:37:19.732 Removing: /var/run/dpdk/spdk_pid106345 00:37:19.732 Removing: /var/run/dpdk/spdk_pid106694 00:37:19.732 Removing: /var/run/dpdk/spdk_pid107273 00:37:19.732 Removing: /var/run/dpdk/spdk_pid107285 00:37:19.732 Removing: /var/run/dpdk/spdk_pid107686 00:37:19.732 Removing: /var/run/dpdk/spdk_pid107850 00:37:19.732 Removing: /var/run/dpdk/spdk_pid108008 00:37:19.732 Removing: /var/run/dpdk/spdk_pid108105 00:37:19.732 Removing: /var/run/dpdk/spdk_pid108264 00:37:19.732 Removing: /var/run/dpdk/spdk_pid108373 00:37:19.732 Removing: /var/run/dpdk/spdk_pid109094 00:37:19.732 Removing: /var/run/dpdk/spdk_pid109129 00:37:19.732 Removing: /var/run/dpdk/spdk_pid109164 00:37:19.732 Removing: /var/run/dpdk/spdk_pid109414 00:37:19.732 Removing: /var/run/dpdk/spdk_pid109449 00:37:19.732 Removing: /var/run/dpdk/spdk_pid109483 00:37:19.732 Removing: /var/run/dpdk/spdk_pid109954 00:37:19.732 Removing: /var/run/dpdk/spdk_pid109985 00:37:19.732 Removing: /var/run/dpdk/spdk_pid110436 00:37:19.732 Removing: /var/run/dpdk/spdk_pid110585 00:37:19.732 Removing: /var/run/dpdk/spdk_pid110619 00:37:19.732 Removing: /var/run/dpdk/spdk_pid57879 00:37:19.732 Removing: /var/run/dpdk/spdk_pid58032 00:37:19.732 Removing: /var/run/dpdk/spdk_pid58290 00:37:19.732 Removing: /var/run/dpdk/spdk_pid58383 00:37:19.732 Removing: /var/run/dpdk/spdk_pid58422 00:37:19.732 Removing: /var/run/dpdk/spdk_pid58526 00:37:19.732 Removing: /var/run/dpdk/spdk_pid58556 00:37:19.732 Removing: /var/run/dpdk/spdk_pid58685 00:37:19.733 Removing: /var/run/dpdk/spdk_pid58959 00:37:19.733 Removing: /var/run/dpdk/spdk_pid59132 00:37:19.733 Removing: /var/run/dpdk/spdk_pid59222 00:37:19.733 Removing: /var/run/dpdk/spdk_pid59316 00:37:19.993 Removing: /var/run/dpdk/spdk_pid59408 00:37:19.993 Removing: /var/run/dpdk/spdk_pid59447 00:37:19.993 Removing: /var/run/dpdk/spdk_pid59477 00:37:19.993 Removing: /var/run/dpdk/spdk_pid59546 00:37:19.993 Removing: /var/run/dpdk/spdk_pid59641 00:37:19.993 Removing: /var/run/dpdk/spdk_pid60258 00:37:19.993 Removing: /var/run/dpdk/spdk_pid60311 00:37:19.993 Removing: /var/run/dpdk/spdk_pid60369 00:37:19.993 Removing: /var/run/dpdk/spdk_pid60389 00:37:19.993 Removing: /var/run/dpdk/spdk_pid60451 00:37:19.993 Removing: /var/run/dpdk/spdk_pid60474 00:37:19.993 Removing: /var/run/dpdk/spdk_pid60542 00:37:19.993 Removing: /var/run/dpdk/spdk_pid60570 00:37:19.993 Removing: /var/run/dpdk/spdk_pid60616 00:37:19.993 Removing: /var/run/dpdk/spdk_pid60646 00:37:19.993 Removing: /var/run/dpdk/spdk_pid60692 00:37:19.993 Removing: /var/run/dpdk/spdk_pid60723 00:37:19.993 Removing: /var/run/dpdk/spdk_pid60872 00:37:19.993 Removing: /var/run/dpdk/spdk_pid60902 00:37:19.993 Removing: /var/run/dpdk/spdk_pid60990 00:37:19.993 Removing: /var/run/dpdk/spdk_pid61469 00:37:19.993 Removing: /var/run/dpdk/spdk_pid61852 00:37:19.993 Removing: /var/run/dpdk/spdk_pid64348 00:37:19.993 Removing: /var/run/dpdk/spdk_pid64390 00:37:19.993 Removing: /var/run/dpdk/spdk_pid64757 00:37:19.993 Removing: 
/var/run/dpdk/spdk_pid64808 00:37:19.993 Removing: /var/run/dpdk/spdk_pid65206 00:37:19.993 Removing: /var/run/dpdk/spdk_pid65782 00:37:19.993 Removing: /var/run/dpdk/spdk_pid66216 00:37:19.993 Removing: /var/run/dpdk/spdk_pid67271 00:37:19.993 Removing: /var/run/dpdk/spdk_pid68348 00:37:19.993 Removing: /var/run/dpdk/spdk_pid68466 00:37:19.993 Removing: /var/run/dpdk/spdk_pid68535 00:37:19.993 Removing: /var/run/dpdk/spdk_pid70142 00:37:19.993 Removing: /var/run/dpdk/spdk_pid70483 00:37:19.993 Removing: /var/run/dpdk/spdk_pid74394 00:37:19.993 Removing: /var/run/dpdk/spdk_pid74816 00:37:19.993 Removing: /var/run/dpdk/spdk_pid75422 00:37:19.993 Removing: /var/run/dpdk/spdk_pid75940 00:37:19.993 Removing: /var/run/dpdk/spdk_pid81420 00:37:19.993 Removing: /var/run/dpdk/spdk_pid81900 00:37:19.993 Removing: /var/run/dpdk/spdk_pid82014 00:37:19.993 Removing: /var/run/dpdk/spdk_pid82159 00:37:19.993 Removing: /var/run/dpdk/spdk_pid82211 00:37:19.993 Removing: /var/run/dpdk/spdk_pid82264 00:37:19.993 Removing: /var/run/dpdk/spdk_pid82311 00:37:19.993 Removing: /var/run/dpdk/spdk_pid82483 00:37:19.993 Removing: /var/run/dpdk/spdk_pid82643 00:37:19.993 Removing: /var/run/dpdk/spdk_pid82922 00:37:19.993 Removing: /var/run/dpdk/spdk_pid83027 00:37:19.993 Removing: /var/run/dpdk/spdk_pid83288 00:37:19.993 Removing: /var/run/dpdk/spdk_pid83402 00:37:19.993 Removing: /var/run/dpdk/spdk_pid83531 00:37:19.993 Removing: /var/run/dpdk/spdk_pid83914 00:37:19.993 Removing: /var/run/dpdk/spdk_pid84379 00:37:19.993 Removing: /var/run/dpdk/spdk_pid84380 00:37:19.993 Removing: /var/run/dpdk/spdk_pid84381 00:37:19.993 Removing: /var/run/dpdk/spdk_pid84659 00:37:19.993 Removing: /var/run/dpdk/spdk_pid84933 00:37:19.993 Removing: /var/run/dpdk/spdk_pid85363 00:37:19.993 Removing: /var/run/dpdk/spdk_pid85743 00:37:19.993 Removing: /var/run/dpdk/spdk_pid86346 00:37:19.993 Removing: /var/run/dpdk/spdk_pid86348 00:37:19.993 Removing: /var/run/dpdk/spdk_pid86738 00:37:19.993 Removing: /var/run/dpdk/spdk_pid86752 00:37:19.993 Removing: /var/run/dpdk/spdk_pid86772 00:37:19.993 Removing: /var/run/dpdk/spdk_pid86802 00:37:19.993 Removing: /var/run/dpdk/spdk_pid86807 00:37:19.993 Removing: /var/run/dpdk/spdk_pid87213 00:37:19.993 Removing: /var/run/dpdk/spdk_pid87262 00:37:19.993 Removing: /var/run/dpdk/spdk_pid87646 00:37:19.993 Removing: /var/run/dpdk/spdk_pid87907 00:37:19.993 Removing: /var/run/dpdk/spdk_pid88443 00:37:19.993 Removing: /var/run/dpdk/spdk_pid89086 00:37:19.993 Removing: /var/run/dpdk/spdk_pid90451 00:37:19.993 Removing: /var/run/dpdk/spdk_pid91115 00:37:19.993 Removing: /var/run/dpdk/spdk_pid91117 00:37:19.993 Removing: /var/run/dpdk/spdk_pid93155 00:37:19.993 Removing: /var/run/dpdk/spdk_pid93244 00:37:19.993 Removing: /var/run/dpdk/spdk_pid93331 00:37:19.993 Removing: /var/run/dpdk/spdk_pid93415 00:37:19.993 Removing: /var/run/dpdk/spdk_pid93574 00:37:19.993 Removing: /var/run/dpdk/spdk_pid93657 00:37:19.993 Removing: /var/run/dpdk/spdk_pid93742 00:37:19.993 Removing: /var/run/dpdk/spdk_pid93833 00:37:19.993 Removing: /var/run/dpdk/spdk_pid94225 00:37:19.993 Removing: /var/run/dpdk/spdk_pid95004 00:37:19.993 Removing: /var/run/dpdk/spdk_pid96399 00:37:19.993 Removing: /var/run/dpdk/spdk_pid96599 00:37:19.993 Removing: /var/run/dpdk/spdk_pid96896 00:37:19.993 Removing: /var/run/dpdk/spdk_pid97440 00:37:19.993 Removing: /var/run/dpdk/spdk_pid97834 00:37:19.993 Clean 00:37:20.254 12:36:15 -- common/autotest_common.sh@1453 -- # return 0 00:37:20.254 12:36:15 -- spdk/autotest.sh@389 -- # timing_exit 
post_cleanup 00:37:20.254 12:36:15 -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:20.254 12:36:15 -- common/autotest_common.sh@10 -- # set +x 00:37:20.254 12:36:15 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:37:20.254 12:36:15 -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:20.254 12:36:15 -- common/autotest_common.sh@10 -- # set +x 00:37:20.254 12:36:15 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:37:20.254 12:36:15 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:37:20.254 12:36:15 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:37:20.254 12:36:15 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:37:20.254 12:36:15 -- spdk/autotest.sh@398 -- # hostname 00:37:20.254 12:36:15 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:37:20.513 geninfo: WARNING: invalid characters removed from testname! 00:37:47.066 12:36:39 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:48.004 12:36:43 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:50.548 12:36:46 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:53.095 12:36:48 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:55.774 12:36:51 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:37:58.319 12:36:53 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:38:00.864 12:36:56 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:38:00.864 12:36:56 -- spdk/autorun.sh@1 -- $ timing_finish 00:38:00.864 12:36:56 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:38:00.864 12:36:56 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:38:00.864 12:36:56 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:38:00.864 12:36:56 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:38:00.864 + [[ -n 4987 ]] 00:38:00.864 + sudo kill 4987 00:38:00.875 [Pipeline] } 00:38:00.889 [Pipeline] // timeout 00:38:00.894 [Pipeline] } 00:38:00.909 [Pipeline] // stage 00:38:00.914 [Pipeline] } 00:38:00.928 [Pipeline] // catchError 00:38:00.937 [Pipeline] stage 00:38:00.940 [Pipeline] { (Stop VM) 00:38:00.952 [Pipeline] sh 00:38:01.233 + vagrant halt 00:38:03.774 ==> default: Halting domain... 00:38:09.132 [Pipeline] sh 00:38:09.436 + vagrant destroy -f 00:38:11.981 ==> default: Removing domain... 00:38:12.253 [Pipeline] sh 00:38:12.534 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output 00:38:12.544 [Pipeline] } 00:38:12.559 [Pipeline] // stage 00:38:12.564 [Pipeline] } 00:38:12.578 [Pipeline] // dir 00:38:12.583 [Pipeline] } 00:38:12.597 [Pipeline] // wrap 00:38:12.603 [Pipeline] } 00:38:12.615 [Pipeline] // catchError 00:38:12.624 [Pipeline] stage 00:38:12.626 [Pipeline] { (Epilogue) 00:38:12.638 [Pipeline] sh 00:38:12.923 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:38:18.314 [Pipeline] catchError 00:38:18.315 [Pipeline] { 00:38:18.325 [Pipeline] sh 00:38:18.610 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:38:18.610 Artifacts sizes are good 00:38:18.620 [Pipeline] } 00:38:18.632 [Pipeline] // catchError 00:38:18.641 [Pipeline] archiveArtifacts 00:38:18.646 Archiving artifacts 00:38:18.741 [Pipeline] cleanWs 00:38:18.754 [WS-CLEANUP] Deleting project workspace... 00:38:18.754 [WS-CLEANUP] Deferred wipeout is used... 00:38:18.761 [WS-CLEANUP] done 00:38:18.763 [Pipeline] } 00:38:18.776 [Pipeline] // stage 00:38:18.780 [Pipeline] } 00:38:18.793 [Pipeline] // node 00:38:18.798 [Pipeline] End of Pipeline 00:38:18.825 Finished: SUCCESS
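For reference, the coverage post-processing near the end of this log reduces to three lcov steps: capture the test run's counters, merge them with the baseline, then filter out non-project sources (vendored DPDK, system headers under /usr, and a few example/app paths). A condensed sketch with the repeated --rc branch/function-coverage flags and the full ../output paths elided; the remaining flags and patterns mirror the invocations above:

  lcov -q -c --no-external -d /home/vagrant/spdk_repo/spdk \
       -t fedora39-cloud-1721788873-2326 -o cov_test.info        # capture test counters
  lcov -q -a cov_base.info -a cov_test.info -o cov_total.info    # merge baseline + test
  lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info         # drop vendored DPDK
  lcov -q -r cov_total.info --ignore-errors unused,unused \
       '/usr/*' -o cov_total.info                                # drop system sources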